Test Report: Docker_Linux_crio_arm64 19780

d63f64bffc284d34b6c2581e44dece8bfcca0b7a:2024-10-09:36574

Failed tests (3/328)

Order  Failed test                            Duration (s)
-----  -------------------------------------  ------------
32     TestAddons/serial/GCPAuth/PullSecret         480.93
35     TestAddons/parallel/Ingress                  153.26
37     TestAddons/parallel/MetricsServer            336.30

TestAddons/serial/GCPAuth/PullSecret (480.93s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-527950 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-527950 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d0d58d2-320c-461e-8322-7e21e9176927] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-527950 -n addons-527950
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-09 18:58:32.663603368 +0000 UTC m=+730.486664220
addons_test.go:627: (dbg) Run:  kubectl --context addons-527950 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-527950 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-527950/192.168.49.2
Start Time:       Wed, 09 Oct 2024 18:50:32 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.21
IPs:
  IP:  10.244.0.21
Containers:
  busybox:
    Container ID:  
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh45z (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lh45z:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/busybox to addons-527950
  Normal   Pulling    6m31s (x4 over 8m)      kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     6m31s (x4 over 8m)      kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
  Warning  Failed     6m31s (x4 over 8m)      kubelet            Error: ErrImagePull
  Warning  Failed     6m15s (x6 over 7m59s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m47s (x21 over 7m59s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-527950 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-527950 logs busybox -n default: exit status 1 (109.449868ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:627: kubectl --context addons-527950 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.93s)
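For reference, the failing test pod corresponds to a manifest along these lines. This is a sketch reconstructed purely from the describe output above (name, label, image, and command); testdata/busybox.yaml in the minikube repository is the authoritative source and may differ in detail:

```yaml
# Reconstructed from the kubectl describe output; not the literal testdata/busybox.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  labels:
    integration-test: busybox   # the label the test waits on
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc   # pull of this image fails with auth errors
    command:
    - sleep
    - "3600"
```

The gcp-auth webhook injects the GOOGLE_APPLICATION_CREDENTIALS environment and the /google-app-creds.json mount seen in the describe output; they are not part of the test manifest itself.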

TestAddons/parallel/Ingress (153.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-527950 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-527950 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-527950 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c27f81c3-b685-489f-9fca-c5cd9e69891f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c27f81c3-b685-489f-9fca-c5cd9e69891f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003389088s
I1009 19:01:26.884446  303278 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-527950 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.737377622s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-527950 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-527950
helpers_test.go:235: (dbg) docker inspect addons-527950:
-- stdout --
	[
	    {
	        "Id": "2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553",
	        "Created": "2024-10-09T18:47:05.648185716Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-09T18:47:05.802356014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/hostname",
	        "HostsPath": "/var/lib/docker/containers/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/hosts",
	        "LogPath": "/var/lib/docker/containers/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553-json.log",
	        "Name": "/addons-527950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-527950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-527950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/57d52d8957517c47a94d772a4ec553e1973f6b4b31859c22ead328b38e49865f-init/diff:/var/lib/docker/overlay2/32ad11673c72cdd61b2cbdcf2c702ee1fe66adabc05fc451cdf50fb47fc60aee/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57d52d8957517c47a94d772a4ec553e1973f6b4b31859c22ead328b38e49865f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57d52d8957517c47a94d772a4ec553e1973f6b4b31859c22ead328b38e49865f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57d52d8957517c47a94d772a4ec553e1973f6b4b31859c22ead328b38e49865f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-527950",
	                "Source": "/var/lib/docker/volumes/addons-527950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-527950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-527950",
	                "name.minikube.sigs.k8s.io": "addons-527950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4df18c8347fba75af91bc3ac819789f4ff4d4b035ff43dd1ad95716976a94617",
	            "SandboxKey": "/var/run/docker/netns/4df18c8347fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-527950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "764ce54092e8dc7bf53c454d0b923423aa19daaa0febb9374339d91784840cb0",
	                    "EndpointID": "edab63be22d86ee9116a85700006d56d2f146a683906f178f24b04f3444ce549",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-527950",
	                        "2dc08be67930"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-527950 -n addons-527950
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 logs -n 25: (1.588425546s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-041328                                                                     | download-only-041328   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-405051                                                                     | download-only-405051   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | --download-only -p                                                                          | download-docker-506477 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | download-docker-506477                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-506477                                                                   | download-docker-506477 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-492289   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | binary-mirror-492289                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35183                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-492289                                                                     | binary-mirror-492289   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| addons  | disable dashboard -p                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-527950                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-527950                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-527950 --wait=true                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:50 UTC | 09 Oct 24 18:50 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | -p addons-527950                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:59 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-527950 ip                                                                            | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-527950 ssh cat                                                                       | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | /opt/local-path-provisioner/pvc-78e4294a-ee74-4947-a0a7-ae40d0f13e44_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 19:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:00 UTC | 09 Oct 24 19:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:00 UTC | 09 Oct 24 19:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:01 UTC | 09 Oct 24 19:01 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-527950 ssh curl -s                                                                   | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-527950 ip                                                                            | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:03 UTC | 09 Oct 24 19:03 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:40.967386  304033 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:40.967587  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:40.967614  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:40.967632  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:40.967935  304033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 18:46:40.968422  304033 out.go:352] Setting JSON to false
	I1009 18:46:40.969276  304033 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8948,"bootTime":1728490653,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 18:46:40.969376  304033 start.go:139] virtualization:  
	I1009 18:46:40.972220  304033 out.go:177] * [addons-527950] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 18:46:40.974908  304033 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:46:40.974957  304033 notify.go:220] Checking for updates...
	I1009 18:46:40.977578  304033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:40.979594  304033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 18:46:40.981442  304033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	I1009 18:46:40.983587  304033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:46:40.985813  304033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:46:40.987817  304033 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:41.013393  304033 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:41.013526  304033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:41.080358  304033 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-09 18:46:41.070449123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:41.080468  304033 docker.go:318] overlay module found
	I1009 18:46:41.082663  304033 out.go:177] * Using the docker driver based on user configuration
	I1009 18:46:41.084439  304033 start.go:297] selected driver: docker
	I1009 18:46:41.084461  304033 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:41.084476  304033 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:46:41.085095  304033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:41.134759  304033 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-09 18:46:41.124919639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:41.134963  304033 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:41.135184  304033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:46:41.137132  304033 out.go:177] * Using Docker driver with root privileges
	I1009 18:46:41.139121  304033 cni.go:84] Creating CNI manager for ""
	I1009 18:46:41.139197  304033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:41.139211  304033 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:41.139304  304033 start.go:340] cluster config:
	{Name:addons-527950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:41.141358  304033 out.go:177] * Starting "addons-527950" primary control-plane node in "addons-527950" cluster
	I1009 18:46:41.143582  304033 cache.go:121] Beginning downloading kic base image for docker with crio
	I1009 18:46:41.145591  304033 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:41.147426  304033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:41.147480  304033 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1009 18:46:41.147493  304033 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:41.147491  304033 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:41.147575  304033 preload.go:172] Found /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 18:46:41.147585  304033 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 18:46:41.148086  304033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/config.json ...
	I1009 18:46:41.148129  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/config.json: {Name:mk2ee10dbe477e541f5b1df0f33b07ee974c06c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:41.161424  304033 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:41.161534  304033 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:41.161554  304033 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1009 18:46:41.161560  304033 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1009 18:46:41.161567  304033 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1009 18:46:41.161572  304033 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1009 18:46:58.564330  304033 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1009 18:46:58.564369  304033 cache.go:194] Successfully downloaded all kic artifacts
	I1009 18:46:58.564399  304033 start.go:360] acquireMachinesLock for addons-527950: {Name:mk47047584b5ff43fa0debdcf458de7b2e027c65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:46:58.564517  304033 start.go:364] duration metric: took 94.834µs to acquireMachinesLock for "addons-527950"
	I1009 18:46:58.564549  304033 start.go:93] Provisioning new machine with config: &{Name:addons-527950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:46:58.564621  304033 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:46:58.567508  304033 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1009 18:46:58.567771  304033 start.go:159] libmachine.API.Create for "addons-527950" (driver="docker")
	I1009 18:46:58.567807  304033 client.go:168] LocalClient.Create starting
	I1009 18:46:58.567942  304033 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem
	I1009 18:46:58.681112  304033 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/cert.pem
	I1009 18:46:59.329402  304033 cli_runner.go:164] Run: docker network inspect addons-527950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:46:59.344830  304033 cli_runner.go:211] docker network inspect addons-527950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:46:59.344915  304033 network_create.go:284] running [docker network inspect addons-527950] to gather additional debugging logs...
	I1009 18:46:59.344935  304033 cli_runner.go:164] Run: docker network inspect addons-527950
	W1009 18:46:59.361259  304033 cli_runner.go:211] docker network inspect addons-527950 returned with exit code 1
	I1009 18:46:59.361290  304033 network_create.go:287] error running [docker network inspect addons-527950]: docker network inspect addons-527950: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-527950 not found
	I1009 18:46:59.361305  304033 network_create.go:289] output of [docker network inspect addons-527950]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-527950 not found
	
	** /stderr **
	I1009 18:46:59.361412  304033 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:46:59.378570  304033 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c6740}
	I1009 18:46:59.378626  304033 network_create.go:124] attempt to create docker network addons-527950 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:46:59.378691  304033 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-527950 addons-527950
	I1009 18:46:59.446268  304033 network_create.go:108] docker network addons-527950 192.168.49.0/24 created
	I1009 18:46:59.446303  304033 kic.go:121] calculated static IP "192.168.49.2" for the "addons-527950" container
	I1009 18:46:59.446378  304033 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:46:59.461021  304033 cli_runner.go:164] Run: docker volume create addons-527950 --label name.minikube.sigs.k8s.io=addons-527950 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:46:59.476583  304033 oci.go:103] Successfully created a docker volume addons-527950
	I1009 18:46:59.476682  304033 cli_runner.go:164] Run: docker run --rm --name addons-527950-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527950 --entrypoint /usr/bin/test -v addons-527950:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1009 18:47:01.523666  304033 cli_runner.go:217] Completed: docker run --rm --name addons-527950-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527950 --entrypoint /usr/bin/test -v addons-527950:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (2.04694108s)
	I1009 18:47:01.523699  304033 oci.go:107] Successfully prepared a docker volume addons-527950
	I1009 18:47:01.523720  304033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:01.523740  304033 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:47:01.523809  304033 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-527950:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:47:05.581175  304033 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-527950:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.05728277s)
	I1009 18:47:05.581212  304033 kic.go:203] duration metric: took 4.057468754s to extract preloaded images to volume ...
	W1009 18:47:05.581352  304033 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 18:47:05.581464  304033 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:47:05.633858  304033 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-527950 --name addons-527950 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527950 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-527950 --network addons-527950 --ip 192.168.49.2 --volume addons-527950:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1009 18:47:05.968005  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Running}}
	I1009 18:47:05.987768  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:06.015124  304033 cli_runner.go:164] Run: docker exec addons-527950 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:47:06.082190  304033 oci.go:144] the created container "addons-527950" has a running status.
	I1009 18:47:06.082218  304033 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa...
	I1009 18:47:06.614569  304033 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:47:06.649674  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:06.670077  304033 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:47:06.670102  304033 kic_runner.go:114] Args: [docker exec --privileged addons-527950 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:47:06.744206  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:06.768546  304033 machine.go:93] provisionDockerMachine start ...
	I1009 18:47:06.768651  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:06.788458  304033 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:06.789698  304033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1009 18:47:06.789716  304033 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:47:06.943346  304033 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-527950
	
	I1009 18:47:06.943413  304033 ubuntu.go:169] provisioning hostname "addons-527950"
	I1009 18:47:06.943516  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:06.966001  304033 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:06.966250  304033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1009 18:47:06.966271  304033 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-527950 && echo "addons-527950" | sudo tee /etc/hostname
	I1009 18:47:07.118527  304033 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-527950
	
	I1009 18:47:07.118612  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:07.134968  304033 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:07.135213  304033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1009 18:47:07.135239  304033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-527950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-527950/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-527950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:47:07.263840  304033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:47:07.263870  304033 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19780-297764/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-297764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-297764/.minikube}
	I1009 18:47:07.263901  304033 ubuntu.go:177] setting up certificates
	I1009 18:47:07.263920  304033 provision.go:84] configureAuth start
	I1009 18:47:07.263987  304033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527950
	I1009 18:47:07.280805  304033 provision.go:143] copyHostCerts
	I1009 18:47:07.280887  304033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-297764/.minikube/ca.pem (1078 bytes)
	I1009 18:47:07.281018  304033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-297764/.minikube/cert.pem (1123 bytes)
	I1009 18:47:07.281074  304033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-297764/.minikube/key.pem (1675 bytes)
	I1009 18:47:07.281116  304033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-297764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca-key.pem org=jenkins.addons-527950 san=[127.0.0.1 192.168.49.2 addons-527950 localhost minikube]
	I1009 18:47:07.505349  304033 provision.go:177] copyRemoteCerts
	I1009 18:47:07.505420  304033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:47:07.505462  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:07.522143  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:07.616808  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:47:07.642666  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:47:07.670787  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:47:07.695667  304033 provision.go:87] duration metric: took 431.722087ms to configureAuth
	I1009 18:47:07.695694  304033 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:47:07.695902  304033 config.go:182] Loaded profile config "addons-527950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:07.696007  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:07.712759  304033 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:07.713002  304033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1009 18:47:07.713023  304033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
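The drop-in written above can be reproduced in isolation. This is a minimal sketch of the same printf-to-file pattern, writing to a temp file instead of /etc/sysconfig/crio.minikube and omitting the systemctl restart (paths here are illustrative, not the real target):

```shell
# Write the CRIO_MINIKUBE_OPTIONS drop-in the log creates, but to a temp
# file so the sketch needs no root and touches no system state.
out=$(mktemp)
printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" > "$out"
cat "$out"
```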
	I1009 18:47:07.940948  304033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:47:07.940973  304033 machine.go:96] duration metric: took 1.172408163s to provisionDockerMachine
	I1009 18:47:07.940983  304033 client.go:171] duration metric: took 9.373164811s to LocalClient.Create
	I1009 18:47:07.941004  304033 start.go:167] duration metric: took 9.373235103s to libmachine.API.Create "addons-527950"
	I1009 18:47:07.941011  304033 start.go:293] postStartSetup for "addons-527950" (driver="docker")
	I1009 18:47:07.941028  304033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:47:07.941100  304033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:47:07.941147  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:07.963555  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:08.061537  304033 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:47:08.064996  304033 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:47:08.065031  304033 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:47:08.065042  304033 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:47:08.065050  304033 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1009 18:47:08.065062  304033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-297764/.minikube/addons for local assets ...
	I1009 18:47:08.065138  304033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-297764/.minikube/files for local assets ...
	I1009 18:47:08.065167  304033 start.go:296] duration metric: took 124.149363ms for postStartSetup
	I1009 18:47:08.065491  304033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527950
	I1009 18:47:08.082273  304033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/config.json ...
	I1009 18:47:08.082606  304033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:47:08.082664  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:08.100526  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:08.188631  304033 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:47:08.193048  304033 start.go:128] duration metric: took 9.628410948s to createHost
	I1009 18:47:08.193072  304033 start.go:83] releasing machines lock for "addons-527950", held for 9.62854039s
	I1009 18:47:08.193145  304033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527950
	I1009 18:47:08.210240  304033 ssh_runner.go:195] Run: cat /version.json
	I1009 18:47:08.210295  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:08.210311  304033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:47:08.210383  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:08.231489  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:08.231668  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:08.319213  304033 ssh_runner.go:195] Run: systemctl --version
	I1009 18:47:08.458240  304033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:47:08.598441  304033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:47:08.602575  304033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:47:08.624818  304033 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:47:08.624957  304033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:47:08.657848  304033 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
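The two `find` invocations above sideline conflicting CNI configs by renaming them with a `.mk_disabled` suffix so cri-o no longer loads them. A self-contained sketch of that rename trick, run against a temp directory with illustrative file names (no sudo, nothing under /etc is touched):

```shell
# Create a throwaway CNI config dir with the file names seen in the log.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/100-crio-bridge.conf" "$d/10-kindnet.conflist"
# Rename bridge/podman configs that are not already disabled; other
# configs (e.g. the kindnet one) are left alone.
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```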
	I1009 18:47:08.657914  304033 start.go:495] detecting cgroup driver to use...
	I1009 18:47:08.657962  304033 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 18:47:08.658030  304033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:47:08.674220  304033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:47:08.686090  304033 docker.go:217] disabling cri-docker service (if available) ...
	I1009 18:47:08.686157  304033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:47:08.700605  304033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:47:08.715393  304033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:47:08.805782  304033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:47:08.893036  304033 docker.go:233] disabling docker service ...
	I1009 18:47:08.893103  304033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:47:08.914155  304033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:47:08.925184  304033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:47:09.013325  304033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:47:09.106091  304033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:47:09.118765  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:47:09.135628  304033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 18:47:09.135701  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.145458  304033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:47:09.145526  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.155100  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.164516  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.173943  304033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:47:09.183027  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.192532  304033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.208097  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
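The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in four steps: set the pause image, switch the cgroup driver to cgroupfs, reset conmon_cgroup to "pod", and open low ports via default_sysctls. A sketch of the same edits applied to a local copy of the file (the starting values in the heredoc are illustrative; GNU sed is assumed):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
# Point cri-o at the pause image minikube expects.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
# Match the host's detected cgroupfs driver.
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Drop any existing conmon_cgroup and re-add "pod" after cgroup_manager
# (required when cgroup_manager is cgroupfs).
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
# Ensure a default_sysctls list exists, then allow unprivileged low ports.
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
cat "$conf"
```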
	I1009 18:47:09.217707  304033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:47:09.226046  304033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:47:09.234429  304033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:09.317751  304033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:47:09.427421  304033 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:47:09.427502  304033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:47:09.431177  304033 start.go:563] Will wait 60s for crictl version
	I1009 18:47:09.431241  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:47:09.434626  304033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:47:09.469924  304033 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1009 18:47:09.470036  304033 ssh_runner.go:195] Run: crio --version
	I1009 18:47:09.508234  304033 ssh_runner.go:195] Run: crio --version
	I1009 18:47:09.548641  304033 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1009 18:47:09.550467  304033 cli_runner.go:164] Run: docker network inspect addons-527950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:47:09.565652  304033 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:47:09.569090  304033 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:47:09.579550  304033 kubeadm.go:883] updating cluster {Name:addons-527950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1009 18:47:09.579667  304033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:09.579728  304033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:47:09.659380  304033 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:47:09.659407  304033 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:47:09.659463  304033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:47:09.695675  304033 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:47:09.695698  304033 cache_images.go:84] Images are preloaded, skipping loading
	I1009 18:47:09.695707  304033 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1009 18:47:09.695806  304033 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-527950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:47:09.695909  304033 ssh_runner.go:195] Run: crio config
	I1009 18:47:09.742122  304033 cni.go:84] Creating CNI manager for ""
	I1009 18:47:09.742148  304033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:47:09.742162  304033 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 18:47:09.742185  304033 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-527950 NodeName:addons-527950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:47:09.742333  304033 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-527950"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:47:09.742408  304033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 18:47:09.751565  304033 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:47:09.751659  304033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:47:09.760598  304033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 18:47:09.779421  304033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:47:09.797672  304033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1009 18:47:09.815460  304033 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:47:09.818703  304033 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
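The /etc/hosts update above (also used earlier for host.minikube.internal) is an idempotent rewrite: strip any stale entry for the name, then append the current mapping, so re-running never duplicates the line. A sketch of the same trick on a temp file (file contents and paths here are illustrative; bash is assumed for the `$'...'` quoting):

```shell
# Seed a fake hosts file that already contains an entry for the name.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n' > "$hosts"
# Remove any existing line ending in the name, then append the mapping.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'192.168.49.2\tcontrol-plane.minikube.internal'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running the block twice leaves the file unchanged, which is the point of the grep-then-append structure.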
	I1009 18:47:09.829474  304033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:09.916676  304033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:47:09.931059  304033 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950 for IP: 192.168.49.2
	I1009 18:47:09.931135  304033 certs.go:194] generating shared ca certs ...
	I1009 18:47:09.931172  304033 certs.go:226] acquiring lock for ca certs: {Name:mk418a701df590b3680a6c2f2b51a4efe8f18158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:09.931353  304033 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-297764/.minikube/ca.key
	I1009 18:47:10.138892  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt ...
	I1009 18:47:10.138928  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt: {Name:mka573a2390739d804ee8d59f4a43e86b90264a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:10.139596  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/ca.key ...
	I1009 18:47:10.139618  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/ca.key: {Name:mk7da466fe49415e1687db949c3a1f708289c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:10.139760  304033 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.key
	I1009 18:47:11.164386  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.crt ...
	I1009 18:47:11.164417  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.crt: {Name:mkec4d80a4ab0fb9ee287b7e7a4f7ac45a446127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.164608  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.key ...
	I1009 18:47:11.164621  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.key: {Name:mke0ef870817ab2bcee921a0ff5cb39c33e6eef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.164706  304033 certs.go:256] generating profile certs ...
	I1009 18:47:11.164762  304033 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.key
	I1009 18:47:11.164787  304033 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt with IP's: []
	I1009 18:47:11.436858  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt ...
	I1009 18:47:11.436897  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: {Name:mkbcb7ac22e740f2304a6be4c2633cb0af076ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.437140  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.key ...
	I1009 18:47:11.437159  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.key: {Name:mk67eac7ee05a3a1a7a6380796dbbe334f8625f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.437249  304033 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key.686f537e
	I1009 18:47:11.437270  304033 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt.686f537e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:47:11.662461  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt.686f537e ...
	I1009 18:47:11.662491  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt.686f537e: {Name:mkf0f88aa72f17d15a6129e5ca54a443493db4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.662694  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key.686f537e ...
	I1009 18:47:11.662715  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key.686f537e: {Name:mk3577c4c7a4f5fac0afb1a3e6e7d40d0beb502b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.662802  304033 certs.go:381] copying /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt.686f537e -> /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt
	I1009 18:47:11.662923  304033 certs.go:385] copying /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key.686f537e -> /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key
	I1009 18:47:11.663053  304033 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.key
	I1009 18:47:11.663085  304033 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.crt with IP's: []
	I1009 18:47:12.389711  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.crt ...
	I1009 18:47:12.389747  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.crt: {Name:mkd26c072f0e9883918450a800bb3a8b4f91aa9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:12.389939  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.key ...
	I1009 18:47:12.389953  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.key: {Name:mk83abffff6bed48819c60b9bcb07a45468162ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:12.390146  304033 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:47:12.390194  304033 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:47:12.390221  304033 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:47:12.390251  304033 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/key.pem (1675 bytes)
	I1009 18:47:12.390850  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:47:12.416462  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:47:12.441202  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:47:12.465154  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:47:12.490047  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:47:12.515975  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:47:12.540565  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:47:12.565887  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:47:12.591023  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:47:12.614949  304033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:47:12.633234  304033 ssh_runner.go:195] Run: openssl version
	I1009 18:47:12.638621  304033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:47:12.647993  304033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:12.651338  304033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:12.651402  304033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:12.658247  304033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:47:12.667592  304033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:47:12.670772  304033 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:47:12.670859  304033 kubeadm.go:392] StartCluster: {Name:addons-527950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:12.670941  304033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:47:12.671004  304033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:47:12.707540  304033 cri.go:89] found id: ""
	I1009 18:47:12.707664  304033 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:47:12.716580  304033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:47:12.725484  304033 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:47:12.725561  304033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:47:12.734565  304033 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:47:12.734587  304033 kubeadm.go:157] found existing configuration files:
	
	I1009 18:47:12.734657  304033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:47:12.743660  304033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:47:12.743730  304033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:47:12.751953  304033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:47:12.761032  304033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:47:12.761098  304033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:47:12.769308  304033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:47:12.777927  304033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:47:12.778022  304033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:47:12.786608  304033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:47:12.795604  304033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:47:12.795702  304033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:47:12.804210  304033 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:47:12.844186  304033 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 18:47:12.844508  304033 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 18:47:12.863957  304033 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:47:12.864059  304033 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1009 18:47:12.864127  304033 kubeadm.go:310] OS: Linux
	I1009 18:47:12.864201  304033 kubeadm.go:310] CGROUPS_CPU: enabled
	I1009 18:47:12.864275  304033 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1009 18:47:12.864344  304033 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1009 18:47:12.864416  304033 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1009 18:47:12.864482  304033 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1009 18:47:12.864554  304033 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1009 18:47:12.864620  304033 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1009 18:47:12.864684  304033 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1009 18:47:12.864751  304033 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1009 18:47:12.925312  304033 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:47:12.925455  304033 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:47:12.926070  304033 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:47:12.932503  304033 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:47:12.937401  304033 out.go:235]   - Generating certificates and keys ...
	I1009 18:47:12.937609  304033 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 18:47:12.937714  304033 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 18:47:13.481383  304033 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:47:14.392000  304033 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:47:15.133416  304033 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:47:15.917613  304033 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 18:47:16.312386  304033 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 18:47:16.312676  304033 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-527950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:16.944460  304033 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 18:47:16.944622  304033 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-527950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:17.709146  304033 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:47:18.371425  304033 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:47:18.857265  304033 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 18:47:18.857518  304033 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:47:19.166994  304033 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:47:19.841743  304033 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:47:20.047368  304033 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:47:20.554711  304033 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:47:20.833789  304033 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:47:20.834499  304033 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:47:20.837485  304033 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:47:20.839755  304033 out.go:235]   - Booting up control plane ...
	I1009 18:47:20.839880  304033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:47:20.839963  304033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:47:20.841068  304033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:47:20.851332  304033 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:47:20.857713  304033 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:47:20.857767  304033 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 18:47:20.950657  304033 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:47:20.950786  304033 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:47:22.953242  304033 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.002484857s
	I1009 18:47:22.953336  304033 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 18:47:29.454607  304033 kubeadm.go:310] [api-check] The API server is healthy after 6.50176464s
	I1009 18:47:29.473613  304033 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:47:29.492004  304033 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:47:29.516257  304033 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:47:29.516457  304033 kubeadm.go:310] [mark-control-plane] Marking the node addons-527950 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:47:29.529069  304033 kubeadm.go:310] [bootstrap-token] Using token: 4sfs59.wjhnvipwche2c0ya
	I1009 18:47:29.532782  304033 out.go:235]   - Configuring RBAC rules ...
	I1009 18:47:29.532924  304033 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:47:29.537002  304033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:47:29.547550  304033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:47:29.551787  304033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:47:29.557519  304033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:47:29.562768  304033 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:47:29.861682  304033 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:47:30.343911  304033 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 18:47:30.860793  304033 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 18:47:30.864121  304033 kubeadm.go:310] 
	I1009 18:47:30.864209  304033 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 18:47:30.864222  304033 kubeadm.go:310] 
	I1009 18:47:30.864308  304033 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 18:47:30.864316  304033 kubeadm.go:310] 
	I1009 18:47:30.864342  304033 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 18:47:30.864400  304033 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:47:30.864459  304033 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:47:30.864464  304033 kubeadm.go:310] 
	I1009 18:47:30.864519  304033 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 18:47:30.864528  304033 kubeadm.go:310] 
	I1009 18:47:30.864575  304033 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:47:30.864583  304033 kubeadm.go:310] 
	I1009 18:47:30.864635  304033 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 18:47:30.864714  304033 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:47:30.864785  304033 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:47:30.864793  304033 kubeadm.go:310] 
	I1009 18:47:30.864877  304033 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:47:30.864956  304033 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 18:47:30.864964  304033 kubeadm.go:310] 
	I1009 18:47:30.865048  304033 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4sfs59.wjhnvipwche2c0ya \
	I1009 18:47:30.865153  304033 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33c056a8235ede20aa813560942c562368f7b9dea0d47cbab7f3fe3a61439fce \
	I1009 18:47:30.865176  304033 kubeadm.go:310] 	--control-plane 
	I1009 18:47:30.865185  304033 kubeadm.go:310] 
	I1009 18:47:30.865270  304033 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:47:30.865278  304033 kubeadm.go:310] 
	I1009 18:47:30.865359  304033 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4sfs59.wjhnvipwche2c0ya \
	I1009 18:47:30.865464  304033 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33c056a8235ede20aa813560942c562368f7b9dea0d47cbab7f3fe3a61439fce 
	I1009 18:47:30.867225  304033 kubeadm.go:310] W1009 18:47:12.840620    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:47:30.867578  304033 kubeadm.go:310] W1009 18:47:12.841651    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:47:30.867903  304033 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1009 18:47:30.868043  304033 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:47:30.868079  304033 cni.go:84] Creating CNI manager for ""
	I1009 18:47:30.868091  304033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:47:30.870211  304033 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 18:47:30.872572  304033 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 18:47:30.876374  304033 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 18:47:30.876395  304033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 18:47:30.895222  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 18:47:31.173339  304033 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:47:31.173498  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:31.173600  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-527950 minikube.k8s.io/updated_at=2024_10_09T18_47_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=addons-527950 minikube.k8s.io/primary=true
	I1009 18:47:31.300200  304033 ops.go:34] apiserver oom_adj: -16
	I1009 18:47:31.300314  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:31.800940  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:32.300445  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:32.800428  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:33.301071  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:33.800468  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:34.300448  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:34.801021  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:35.300748  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:35.430120  304033 kubeadm.go:1113] duration metric: took 4.256673073s to wait for elevateKubeSystemPrivileges
	I1009 18:47:35.430152  304033 kubeadm.go:394] duration metric: took 22.759297617s to StartCluster
	I1009 18:47:35.430170  304033 settings.go:142] acquiring lock: {Name:mk94c15161ad7dabfbd54a7b84d6e9487d964391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:35.430286  304033 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 18:47:35.430727  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/kubeconfig: {Name:mk805654b0a3d9c829b5d3a4422736c8bd907781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:35.430933  304033 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:47:35.431081  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:47:35.431337  304033 config.go:182] Loaded profile config "addons-527950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:35.431378  304033 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:47:35.431469  304033 addons.go:69] Setting yakd=true in profile "addons-527950"
	I1009 18:47:35.431488  304033 addons.go:234] Setting addon yakd=true in "addons-527950"
	I1009 18:47:35.431515  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.432042  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.432604  304033 addons.go:69] Setting metrics-server=true in profile "addons-527950"
	I1009 18:47:35.432627  304033 addons.go:234] Setting addon metrics-server=true in "addons-527950"
	I1009 18:47:35.432663  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.433105  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.437557  304033 addons.go:69] Setting cloud-spanner=true in profile "addons-527950"
	I1009 18:47:35.437601  304033 addons.go:234] Setting addon cloud-spanner=true in "addons-527950"
	I1009 18:47:35.437639  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.438276  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.438732  304033 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-527950"
	I1009 18:47:35.438783  304033 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-527950"
	I1009 18:47:35.438819  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.438962  304033 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-527950"
	I1009 18:47:35.439093  304033 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-527950"
	I1009 18:47:35.439145  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.439228  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.440259  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.445233  304033 addons.go:69] Setting default-storageclass=true in profile "addons-527950"
	I1009 18:47:35.445276  304033 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-527950"
	I1009 18:47:35.445648  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.455926  304033 addons.go:69] Setting registry=true in profile "addons-527950"
	I1009 18:47:35.455958  304033 addons.go:234] Setting addon registry=true in "addons-527950"
	I1009 18:47:35.455995  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.456021  304033 addons.go:69] Setting gcp-auth=true in profile "addons-527950"
	I1009 18:47:35.456052  304033 mustload.go:65] Loading cluster: addons-527950
	I1009 18:47:35.456221  304033 config.go:182] Loaded profile config "addons-527950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:35.456452  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.456455  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.460626  304033 addons.go:69] Setting ingress=true in profile "addons-527950"
	I1009 18:47:35.491970  304033 addons.go:234] Setting addon ingress=true in "addons-527950"
	I1009 18:47:35.492033  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.492553  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.460856  304033 addons.go:69] Setting ingress-dns=true in profile "addons-527950"
	I1009 18:47:35.507970  304033 addons.go:234] Setting addon ingress-dns=true in "addons-527950"
	I1009 18:47:35.508024  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.508498  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.460877  304033 addons.go:69] Setting inspektor-gadget=true in profile "addons-527950"
	I1009 18:47:35.535594  304033 addons.go:234] Setting addon inspektor-gadget=true in "addons-527950"
	I1009 18:47:35.535649  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.541439  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.547661  304033 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1009 18:47:35.460927  304033 out.go:177] * Verifying Kubernetes components...
	I1009 18:47:35.474497  304033 addons.go:69] Setting storage-provisioner=true in profile "addons-527950"
	I1009 18:47:35.549392  304033 addons.go:234] Setting addon storage-provisioner=true in "addons-527950"
	I1009 18:47:35.549437  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.474516  304033 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-527950"
	I1009 18:47:35.553791  304033 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-527950"
	I1009 18:47:35.554141  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.554636  304033 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1009 18:47:35.556852  304033 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:47:35.556916  304033 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:47:35.557012  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.474526  304033 addons.go:69] Setting volcano=true in profile "addons-527950"
	I1009 18:47:35.557284  304033 addons.go:234] Setting addon volcano=true in "addons-527950"
	I1009 18:47:35.557324  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.557771  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.474533  304033 addons.go:69] Setting volumesnapshots=true in profile "addons-527950"
	I1009 18:47:35.561582  304033 addons.go:234] Setting addon volumesnapshots=true in "addons-527950"
	I1009 18:47:35.561624  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.562120  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.577215  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.598840  304033 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:47:35.601445  304033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:35.603819  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:47:35.603901  304033 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:47:35.603980  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.629042  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.652203  304033 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:35.652224  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:47:35.652287  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.655035  304033 addons.go:234] Setting addon default-storageclass=true in "addons-527950"
	I1009 18:47:35.655120  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.655655  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.699635  304033 out.go:177]   - Using image docker.io/registry:2.8.3
	I1009 18:47:35.707558  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:47:35.707665  304033 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	W1009 18:47:35.708249  304033 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 18:47:35.699811  304033 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1009 18:47:35.699928  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.748939  304033 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1009 18:47:35.749224  304033 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:35.749258  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:47:35.749351  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.760596  304033 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:35.762509  304033 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:35.762527  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1009 18:47:35.762594  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.767049  304033 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1009 18:47:35.767977  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:47:35.768987  304033 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:47:35.769736  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:47:35.769812  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.789059  304033 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:35.792105  304033 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:35.792132  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:47:35.792194  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.810324  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.811245  304033 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-527950"
	I1009 18:47:35.812199  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.812641  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.816111  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 18:47:35.816255  304033 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1009 18:47:35.816495  304033 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:47:35.816657  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:47:35.820213  304033 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:35.820233  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:47:35.820295  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.824768  304033 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1009 18:47:35.824800  304033 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1009 18:47:35.824879  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.830715  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:47:35.830739  304033 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:47:35.830802  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.867965  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:47:35.873671  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:47:35.877495  304033 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:35.877513  304033 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:47:35.877572  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.878052  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.881756  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:47:35.883886  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.888781  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:47:35.893536  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:47:35.896899  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:47:35.900401  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:47:35.900426  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:47:35.900502  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.963345  304033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:47:35.993783  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.993790  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.011792  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.031894  304033 out.go:177]   - Using image docker.io/busybox:stable
	I1009 18:47:36.032063  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.032923  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.037995  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.039874  304033 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:47:36.041584  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.043781  304033 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:36.043805  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:47:36.044367  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:36.052831  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.081159  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	W1009 18:47:36.082186  304033 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1009 18:47:36.082214  304033 retry.go:31] will retry after 222.285673ms: ssh: handshake failed: EOF
	I1009 18:47:36.199036  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:47:36.199062  304033 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:47:36.208965  304033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:47:36.209035  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:47:36.364361  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:47:36.364426  304033 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:47:36.373759  304033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:47:36.373826  304033 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:47:36.374020  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:36.427399  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:36.468037  304033 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:47:36.468104  304033 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:47:36.499014  304033 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1009 18:47:36.499087  304033 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1009 18:47:36.522134  304033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:36.522214  304033 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:47:36.561569  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:36.574480  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:36.584795  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:47:36.584867  304033 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:47:36.595897  304033 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:36.595966  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:47:36.599286  304033 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:47:36.599358  304033 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:47:36.625695  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:36.687502  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:47:36.687575  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:47:36.699072  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:36.719366  304033 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1009 18:47:36.719445  304033 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1009 18:47:36.748332  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:36.764943  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:36.765015  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:47:36.804387  304033 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:47:36.804467  304033 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:47:36.807429  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:36.823288  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:36.901413  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:47:36.901490  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:47:36.940833  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:36.949854  304033 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1009 18:47:36.949935  304033 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1009 18:47:36.994069  304033 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:47:36.994141  304033 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:47:37.085150  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:47:37.085224  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:47:37.109687  304033 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1009 18:47:37.109762  304033 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1009 18:47:37.176605  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:47:37.176680  304033 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:47:37.303942  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:47:37.303969  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:47:37.312257  304033 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1009 18:47:37.312279  304033 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1009 18:47:37.407328  304033 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:37.407396  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:47:37.474437  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:47:37.474509  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:47:37.482393  304033 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1009 18:47:37.482472  304033 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1009 18:47:37.542596  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:37.566059  304033 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:47:37.566132  304033 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1009 18:47:37.609938  304033 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:37.610061  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1009 18:47:37.619088  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:47:37.619159  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:47:37.653242  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:37.716441  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:47:37.716516  304033 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:47:37.811362  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:47:37.811435  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:47:37.936380  304033 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.047554904s)
	I1009 18:47:37.936458  304033 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1009 18:47:37.936938  304033 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.973513522s)
	I1009 18:47:37.938829  304033 node_ready.go:35] waiting up to 6m0s for node "addons-527950" to be "Ready" ...
	I1009 18:47:37.952478  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:47:37.952554  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:47:38.115304  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:38.115383  304033 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:47:38.229987  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:39.146308  304033 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-527950" context rescaled to 1 replicas
	I1009 18:47:40.102649  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.728585261s)
	I1009 18:47:40.102724  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.675263753s)
	I1009 18:47:40.180829  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:42.457778  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:42.601832  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.97606695s)
	I1009 18:47:42.601897  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.902758069s)
	I1009 18:47:42.601954  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.853551581s)
	I1009 18:47:42.601965  304033 addons.go:475] Verifying addon metrics-server=true in "addons-527950"
	I1009 18:47:42.602009  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.794523923s)
	I1009 18:47:42.602221  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.778862677s)
	I1009 18:47:42.602237  304033 addons.go:475] Verifying addon registry=true in "addons-527950"
	I1009 18:47:42.601772  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.027206149s)
	I1009 18:47:42.602598  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.041004638s)
	I1009 18:47:42.602744  304033 addons.go:475] Verifying addon ingress=true in "addons-527950"
	I1009 18:47:42.602909  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.661994803s)
	I1009 18:47:42.604456  304033 out.go:177] * Verifying registry addon...
	I1009 18:47:42.604536  304033 out.go:177] * Verifying ingress addon...
	I1009 18:47:42.605567  304033 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-527950 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:47:42.609434  304033 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:47:42.609600  304033 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:47:42.621308  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.967975579s)
	I1009 18:47:42.621365  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.078690089s)
	W1009 18:47:42.621393  304033 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:42.621415  304033 retry.go:31] will retry after 269.402403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:42.626872  304033 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:47:42.626905  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:42.628010  304033 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:47:42.628028  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 18:47:42.651712  304033 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
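Editorial note on the warning above (not part of the original log): the `Operation cannot be fulfilled ... the object has been modified` message is a standard Kubernetes optimistic-concurrency conflict — two writers raced to update the `standard` StorageClass. The usual remedy is simply to re-read and re-apply; a hedged manual sketch using `kubectl patch` (which fetches the latest version server-side and so avoids the stale-resourceVersion conflict) would be:

```shell
# Mark the "standard" StorageClass as the cluster default.
# patch works on the server's current object version, sidestepping the
# "object has been modified" conflict seen with stale client-side updates.
kubectl patch storageclass standard -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Verify: the default class is marked "(default)" in the output.
kubectl get storageclass
```

This warning is non-fatal in the run above; minikube's callback simply reports the conflict and the addon flow continues.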
	I1009 18:47:42.891579  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:42.960372  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.730287478s)
	I1009 18:47:42.960410  304033 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-527950"
	I1009 18:47:42.962711  304033 out.go:177] * Verifying csi-hostpath-driver addon...
	I1009 18:47:42.965763  304033 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:47:42.979267  304033 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:47:42.979292  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.139236  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.140236  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.470438  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.616870  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.618481  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.839059  304033 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:47:43.839173  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:43.856861  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:43.969698  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.992864  304033 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:47:44.018568  304033 addons.go:234] Setting addon gcp-auth=true in "addons-527950"
	I1009 18:47:44.018671  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:44.019158  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:44.037564  304033 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:47:44.037623  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:44.069543  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:44.113761  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.114874  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.470256  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:44.615372  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.616721  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.942531  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:44.970874  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.116004  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.116306  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.469442  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.614632  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.616676  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.974353  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.114015  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.115006  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.129856  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.238210804s)
	I1009 18:47:46.129919  304033 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.092337666s)
	I1009 18:47:46.132410  304033 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:46.134402  304033 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1009 18:47:46.136336  304033 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:47:46.136393  304033 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:47:46.172872  304033 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:47:46.172903  304033 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:47:46.202003  304033 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:46.202030  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:47:46.238775  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:46.473442  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.613636  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.614400  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.957671  304033 addons.go:475] Verifying addon gcp-auth=true in "addons-527950"
	I1009 18:47:46.959941  304033 out.go:177] * Verifying gcp-auth addon...
	I1009 18:47:46.962772  304033 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:47:46.977540  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:46.989545  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.990127  304033 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:47:46.990151  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.114479  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.115561  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.477865  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.478597  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:47.613231  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.614694  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.966672  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.969432  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.113365  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.114315  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.466814  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:48.469593  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.613729  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.614145  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.968566  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:48.970635  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.113442  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:49.114331  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.442728  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:49.466179  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:49.468907  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.613933  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.614285  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:49.967573  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:49.972595  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.113657  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.115138  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:50.466322  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:50.468962  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.612790  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.613783  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:50.967263  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:50.969774  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.114889  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.115247  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.442822  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:51.466666  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:51.468827  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.614136  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.614331  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.966997  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:51.970021  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.114659  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.115518  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.466814  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:52.469046  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.614462  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.614746  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.966528  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:52.970045  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.112865  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.113615  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.465945  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:53.468884  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.613759  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.614632  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.941902  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:53.967645  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:53.968981  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.114257  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.115106  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.467158  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:54.469299  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.614096  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.614927  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.967180  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:54.970110  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.114055  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.115130  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.466702  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:55.469774  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.613861  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.614708  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.942373  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:55.966511  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:55.970194  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.113906  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.114601  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.466046  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:56.468704  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.613347  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.614168  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.969257  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:56.971760  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.113796  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.114319  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.467387  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:57.469141  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.614928  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.615500  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.942714  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:57.965940  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:57.969600  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.113472  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.114275  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.465713  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:58.469119  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.613518  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.613841  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.965892  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:58.968750  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.113337  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:59.113850  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.466957  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:59.469562  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.613320  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:59.613978  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.942768  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:59.967569  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:59.971061  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.167947  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.168299  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:00.475238  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.477770  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.615765  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.620867  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:00.967264  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.970258  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.113930  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:01.115036  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.466741  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:01.468963  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.613896  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:01.614551  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.966250  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:01.969250  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.113195  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:02.114413  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.442717  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:02.465948  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:02.468968  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.613203  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:02.613923  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.967326  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:02.969572  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.113367  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:03.114327  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.466682  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:03.469080  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.613080  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:03.613915  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.966623  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:03.969605  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.113982  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:04.114887  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.466929  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:04.470224  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.614190  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:04.615130  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.942842  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:04.966423  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:04.969853  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.114104  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.114335  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:05.465693  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:05.469687  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.613413  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:05.614326  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.966861  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:05.970488  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.114240  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:06.115311  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:06.467075  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:06.470123  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.613196  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:06.614024  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:06.969182  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:06.971148  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.113815  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:07.114686  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:07.442752  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:07.467399  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:07.470071  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.613678  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:07.614430  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:07.967516  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:07.969900  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.114073  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:08.115136  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:08.466317  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:08.468616  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.613467  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:08.615109  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:08.968147  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:08.970309  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.114065  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:09.114825  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:09.466716  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:09.469304  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.615986  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:09.616958  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:09.942863  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:09.968057  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:09.969438  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.113870  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:10.114797  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:10.468148  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:10.469537  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.614109  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:10.614993  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:10.967157  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:10.969626  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.113810  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:11.114729  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:11.467523  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:11.469466  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.612898  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:11.614466  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:11.966755  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:11.969835  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:12.113427  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:12.114395  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:12.441855  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:12.466663  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:12.469544  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:12.613117  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:12.614171  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:12.966560  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:12.968940  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:13.112864  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:13.114070  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:13.465995  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:13.468812  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:13.613216  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:13.614390  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:13.967516  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:13.969698  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:14.113016  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:14.113858  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:14.442350  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:14.466082  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:14.469065  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:14.613566  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:14.615728  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:14.974253  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:14.975329  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:15.116202  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:15.117112  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:15.466829  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:15.468926  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:15.613714  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:15.614708  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:15.965993  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:15.968902  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:16.114033  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:16.114290  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:16.444200  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:16.465705  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:16.469299  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:16.613804  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:16.614653  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:16.968584  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:16.971528  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:17.113154  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:17.113955  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:17.466677  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:17.469587  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:17.613858  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:17.614601  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:17.967590  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:17.969598  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:18.113552  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:18.113793  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:18.466613  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:18.469301  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:18.613432  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:18.614029  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:18.942730  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:18.966919  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:18.969509  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:19.112858  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:19.113727  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:19.467124  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:19.469314  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:19.613988  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:19.615103  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:19.966223  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:19.969805  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:20.114050  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:20.115670  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:20.466334  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:20.468932  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:20.613720  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:20.614515  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:20.966996  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:20.970341  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:21.113529  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:21.124309  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:21.442365  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:21.466443  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:21.469282  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:21.615028  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:21.615288  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:21.967175  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:21.970617  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:22.123818  304033 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:48:22.123872  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:22.128301  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:22.496694  304033 node_ready.go:49] node "addons-527950" has status "Ready":"True"
	I1009 18:48:22.496721  304033 node_ready.go:38] duration metric: took 44.55782189s for node "addons-527950" to be "Ready" ...
	I1009 18:48:22.496733  304033 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:48:22.543655  304033 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:48:22.543681  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:22.544075  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:22.551643  304033 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6xlwc" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:22.642147  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:22.644009  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:22.971789  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:22.973411  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.114385  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:23.115616  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:23.466600  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.471264  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:23.618141  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:23.619694  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:23.967422  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.970910  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:24.122271  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:24.123549  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:24.467212  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:24.472441  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:24.558439  304033 pod_ready.go:103] pod "coredns-7c65d6cfc9-6xlwc" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:24.612945  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:24.615087  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:24.967854  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:24.970633  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:25.063133  304033 pod_ready.go:93] pod "coredns-7c65d6cfc9-6xlwc" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.063160  304033 pod_ready.go:82] duration metric: took 2.511477514s for pod "coredns-7c65d6cfc9-6xlwc" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.063187  304033 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.068741  304033 pod_ready.go:93] pod "etcd-addons-527950" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.068768  304033 pod_ready.go:82] duration metric: took 5.571262ms for pod "etcd-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.068785  304033 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.075151  304033 pod_ready.go:93] pod "kube-apiserver-addons-527950" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.075181  304033 pod_ready.go:82] duration metric: took 6.386404ms for pod "kube-apiserver-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.075195  304033 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.081391  304033 pod_ready.go:93] pod "kube-controller-manager-addons-527950" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.081416  304033 pod_ready.go:82] duration metric: took 6.213427ms for pod "kube-controller-manager-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.081432  304033 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffxxn" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.087591  304033 pod_ready.go:93] pod "kube-proxy-ffxxn" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.087621  304033 pod_ready.go:82] duration metric: took 6.159856ms for pod "kube-proxy-ffxxn" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.087635  304033 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.114767  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:25.115893  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:25.456015  304033 pod_ready.go:93] pod "kube-scheduler-addons-527950" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.456088  304033 pod_ready.go:82] duration metric: took 368.414383ms for pod "kube-scheduler-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.456116  304033 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.468590  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:25.478207  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:25.616983  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:25.619476  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:25.967046  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:25.972924  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:26.115966  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:26.118456  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:26.466523  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:26.473072  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:26.614761  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:26.615252  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:26.976855  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:26.980162  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:27.114736  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:27.115720  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:27.462867  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:27.465819  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:27.470197  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:27.614653  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:27.615507  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:27.968185  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:27.972217  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:28.113956  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:28.114401  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:28.473304  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:28.475807  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:28.615159  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:28.617957  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:28.968100  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:28.978744  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:29.116145  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:29.117082  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:29.463567  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:29.466190  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:29.470341  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:29.613752  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:29.614563  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:29.967968  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:29.970839  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:30.115332  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:30.116714  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:30.471895  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:30.473940  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:30.616029  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:30.616977  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:30.967444  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:30.973307  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:31.116434  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:31.118030  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:31.466384  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:31.470254  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:31.615148  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:31.615662  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:31.962817  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:31.966215  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:31.970714  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:32.114189  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:32.115434  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:32.466167  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:32.470171  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:32.613370  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:32.619230  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:32.966232  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:32.971104  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:33.135767  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:33.136942  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.466619  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:33.470194  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:33.614376  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:33.614871  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.966762  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:33.975513  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:33.982416  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:34.116387  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:34.118234  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.467556  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:34.477536  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:34.616063  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.616975  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:34.971940  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:34.977144  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.123857  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:35.124361  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.466001  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:35.476450  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.618428  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:35.620017  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.973771  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:35.975509  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:35.976914  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:36.114049  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.115166  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:36.468297  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:36.476558  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:36.626259  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.626951  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:36.982473  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:36.985420  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:37.116064  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:37.117439  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.476216  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:37.479493  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:37.615657  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:37.616680  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.978555  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:37.981639  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:38.116509  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:38.120632  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.464270  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:38.470039  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:38.473285  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:38.619883  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:38.621187  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.967218  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:38.972283  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:39.114823  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:39.116199  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.466190  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:39.469860  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:39.613818  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:39.615031  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.965790  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:39.975662  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:40.118383  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.120679  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:40.472028  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:40.478162  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:40.618451  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.621474  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:40.966048  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:40.968112  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:40.972494  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:41.115797  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:41.116788  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.467477  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:41.471388  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:41.616084  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:41.618090  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.972676  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:41.977405  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:42.115596  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:42.117391  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.501410  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:42.504228  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:42.622124  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:42.623663  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.973225  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:42.974679  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:43.122560  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:43.124442  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.463083  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:43.466647  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:43.480649  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:43.615745  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:43.616815  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.968031  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:43.971215  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:44.114166  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:44.115373  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.466216  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:44.470464  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:44.613187  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:44.614854  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.966284  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:44.971705  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:45.114701  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:45.118863  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:45.463320  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:45.485187  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:45.488022  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:45.614156  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:45.617767  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:45.970260  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:45.973832  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:46.120375  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:46.121351  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:46.467133  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:46.470289  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:46.615136  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:46.616049  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:46.971794  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:46.973136  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:47.113858  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:47.115074  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:47.466024  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:47.469942  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:47.614041  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:47.615594  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:47.963480  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:47.966086  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:47.970697  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:48.115534  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:48.116236  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:48.479231  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:48.481167  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:48.616081  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:48.616640  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:48.978543  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:48.980247  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:49.115752  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:49.117215  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:49.470349  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:49.473745  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:49.638445  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:49.641123  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:49.973937  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:49.998462  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:49.999398  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:50.127646  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:50.128641  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:50.477353  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:50.480643  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:50.620936  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:50.621785  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:50.990362  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:50.992213  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:51.137550  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:51.139750  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:51.480889  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:51.482849  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:51.620294  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:51.621124  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:51.983315  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:51.983734  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:51.985674  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:52.121203  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:52.122227  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:52.488190  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:52.488696  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:52.621348  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:52.622397  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:52.973489  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:52.976280  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:53.114505  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:53.115969  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:53.466643  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:53.470645  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:53.615894  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:53.616144  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:53.967476  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:53.972230  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:54.115365  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:54.115957  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:54.462342  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:54.466182  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:54.470124  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:54.613994  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:54.616086  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:54.965850  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:54.970390  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:55.113821  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:55.114931  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:55.469185  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:55.471083  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:55.613770  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:55.615440  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:55.972049  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:55.976515  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:56.116298  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:56.117385  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:56.464928  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:56.471280  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:56.475782  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:56.616184  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:56.617032  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:56.982278  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:56.983170  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:57.116068  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:57.117641  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:57.497978  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:57.504965  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:57.625883  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:57.627448  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:57.969102  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:57.984271  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:58.115870  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:58.117588  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:58.467741  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:58.474789  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:58.617964  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:58.618952  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:58.965836  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:58.976785  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:58.981718  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:59.116153  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:59.117305  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:59.474720  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:59.478028  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:59.615662  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:59.617257  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:59.980876  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:59.989080  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:00.126005  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:00.142328  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:00.470468  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:00.473293  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:00.616025  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:00.617634  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:00.968026  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:00.971990  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:00.973011  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:01.121647  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:01.123101  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:01.467497  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:01.475416  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:01.618173  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:01.620980  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:01.965661  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:01.982918  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:02.114277  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:02.115325  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:02.469769  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.472093  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:02.615019  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:02.615374  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:02.967129  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.970216  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:03.114220  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:03.115908  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:03.462865  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:03.466237  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.471018  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:03.622491  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:03.629655  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:03.972780  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.975673  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:04.117171  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:04.117561  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:04.472146  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.474073  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:04.614433  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:04.615674  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:04.972086  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.974217  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:05.116010  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:05.116503  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:05.469391  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.471296  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:05.614005  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:05.614350  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:05.963099  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:05.965961  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.971255  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:06.114343  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:06.115680  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:06.466353  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.470068  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:06.615652  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:06.617163  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:06.972561  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.974379  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:07.114592  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:07.115275  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:07.492753  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:07.496941  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.618481  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:07.618849  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:07.966317  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:07.972605  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.977117  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.129933  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:08.130767  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:08.470211  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:08.472378  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.615561  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:08.616860  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:08.977309  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:08.987299  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:09.117418  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:09.119248  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:09.466130  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.470234  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:09.615370  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:09.616323  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:09.970548  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.974588  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:09.978156  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.149650  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:10.162144  304033 kapi.go:107] duration metric: took 1m27.552529397s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:49:10.466728  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:10.470334  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.623166  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:10.972282  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.972717  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.122257  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:11.466486  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.470590  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:11.613870  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:11.968700  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.972587  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:12.115452  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:12.462544  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:12.466598  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.470000  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:12.614574  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:12.974719  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.976647  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:13.114622  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:13.467786  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.471589  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:13.614735  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:13.967377  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.970691  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:14.113756  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:14.466102  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.469952  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:14.614393  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:14.962649  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:14.966392  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.970430  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:15.117792  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:15.483272  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:15.483761  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.618147  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:15.972344  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.975980  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:16.118624  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:16.466150  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.470295  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:16.614735  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:16.967547  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.973534  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:17.114521  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:17.463578  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:17.466495  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.470381  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:17.613540  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:17.967749  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.971121  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:18.114855  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:18.468498  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.472841  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:18.614440  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:18.965755  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.970182  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:19.114544  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:19.466601  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.470517  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:19.613787  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:19.971116  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.980100  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:19.981252  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:20.116225  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:20.476972  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.488967  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:20.614027  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:20.974662  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.980039  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:21.113935  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:21.468347  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.475280  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:21.614197  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:21.968400  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.972113  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.114431  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:22.462898  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:22.466091  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:22.470570  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.614986  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:22.975693  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.976806  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.114722  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:23.471253  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.472928  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:23.613729  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:23.965692  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.970459  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:24.114498  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:24.466233  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:24.470198  304033 kapi.go:107] duration metric: took 1m41.504431301s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:49:24.613824  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:24.962593  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:24.968152  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.114411  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:25.467189  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.614302  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:25.967167  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.114354  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:26.467106  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.615126  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:26.966084  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:26.970775  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.114892  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:27.466076  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.614055  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:27.973842  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.117355  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:28.466671  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.616546  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:28.972899  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.114866  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:29.462426  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:29.468636  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.614527  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:29.971817  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.115470  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:30.466964  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.613745  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:30.966303  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.114688  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:31.474490  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:31.476341  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.615551  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:31.968292  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.114165  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:32.470183  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.614946  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:32.968583  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.115306  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:33.487810  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:33.488665  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.615421  304033 kapi.go:107] duration metric: took 1m51.00598366s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:49:33.966423  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.466344  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.966599  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.468430  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.974437  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:35.985140  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.466139  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.977106  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.468165  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.968110  304033 kapi.go:107] duration metric: took 1m51.005343288s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:49:37.970499  304033 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-527950 cluster.
	I1009 18:49:37.972276  304033 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:49:37.973863  304033 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:49:37.975623  304033 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1009 18:49:37.977605  304033 addons.go:510] duration metric: took 2m2.546216298s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1009 18:49:38.464717  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:40.962940  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:43.462949  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:45.463144  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:47.962131  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:49.962252  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:51.963067  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:54.462586  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:56.462815  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:57.469272  304033 pod_ready.go:93] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"True"
	I1009 18:49:57.469298  304033 pod_ready.go:82] duration metric: took 1m32.013161291s for pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace to be "Ready" ...
	I1009 18:49:57.469312  304033 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-frbq8" in "kube-system" namespace to be "Ready" ...
	I1009 18:49:57.474879  304033 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-frbq8" in "kube-system" namespace has status "Ready":"True"
	I1009 18:49:57.474905  304033 pod_ready.go:82] duration metric: took 5.585212ms for pod "nvidia-device-plugin-daemonset-frbq8" in "kube-system" namespace to be "Ready" ...
	I1009 18:49:57.474927  304033 pod_ready.go:39] duration metric: took 1m34.978144286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:49:57.474942  304033 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:49:57.474972  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:49:57.475034  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:49:57.526169  304033 cri.go:89] found id: "4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:49:57.526247  304033 cri.go:89] found id: ""
	I1009 18:49:57.526262  304033 logs.go:282] 1 containers: [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06]
	I1009 18:49:57.526326  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.529983  304033 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:49:57.530094  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:49:57.571108  304033 cri.go:89] found id: "0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:49:57.571201  304033 cri.go:89] found id: ""
	I1009 18:49:57.571226  304033 logs.go:282] 1 containers: [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987]
	I1009 18:49:57.571318  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.575953  304033 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:49:57.576043  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:49:57.613862  304033 cri.go:89] found id: "a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:49:57.613884  304033 cri.go:89] found id: ""
	I1009 18:49:57.613893  304033 logs.go:282] 1 containers: [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6]
	I1009 18:49:57.613949  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.617666  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:49:57.617739  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:49:57.664132  304033 cri.go:89] found id: "5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:49:57.664161  304033 cri.go:89] found id: ""
	I1009 18:49:57.664170  304033 logs.go:282] 1 containers: [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f]
	I1009 18:49:57.664231  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.668036  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:49:57.668109  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:49:57.716265  304033 cri.go:89] found id: "b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:49:57.716346  304033 cri.go:89] found id: ""
	I1009 18:49:57.716362  304033 logs.go:282] 1 containers: [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4]
	I1009 18:49:57.716421  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.720220  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:49:57.720295  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:49:57.765756  304033 cri.go:89] found id: "d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:49:57.765778  304033 cri.go:89] found id: ""
	I1009 18:49:57.765788  304033 logs.go:282] 1 containers: [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6]
	I1009 18:49:57.765847  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.769430  304033 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:49:57.769554  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:49:57.808879  304033 cri.go:89] found id: "658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:49:57.808902  304033 cri.go:89] found id: ""
	I1009 18:49:57.808910  304033 logs.go:282] 1 containers: [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4]
	I1009 18:49:57.808967  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.812398  304033 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:49:57.812425  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:49:58.016603  304033 logs.go:123] Gathering logs for etcd [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987] ...
	I1009 18:49:58.016637  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:49:58.068774  304033 logs.go:123] Gathering logs for coredns [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6] ...
	I1009 18:49:58.068812  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:49:58.119632  304033 logs.go:123] Gathering logs for kube-proxy [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4] ...
	I1009 18:49:58.119742  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:49:58.161474  304033 logs.go:123] Gathering logs for kubelet ...
	I1009 18:49:58.161551  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:49:58.233492  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.018098    1507 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.233735  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.018147    1507 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.235601  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.075946    1507 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.235817  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.236010  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.236237  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.236423  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.236659  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:49:58.276246  304033 logs.go:123] Gathering logs for kube-apiserver [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06] ...
	I1009 18:49:58.276283  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:49:58.350538  304033 logs.go:123] Gathering logs for kube-scheduler [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f] ...
	I1009 18:49:58.350572  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:49:58.406979  304033 logs.go:123] Gathering logs for kube-controller-manager [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6] ...
	I1009 18:49:58.407011  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:49:58.495065  304033 logs.go:123] Gathering logs for kindnet [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4] ...
	I1009 18:49:58.495100  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:49:58.535687  304033 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:49:58.535717  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:49:58.630369  304033 logs.go:123] Gathering logs for container status ...
	I1009 18:49:58.630410  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:49:58.684190  304033 logs.go:123] Gathering logs for dmesg ...
	I1009 18:49:58.684219  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:49:58.700990  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:49:58.701015  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:49:58.701064  304033 out.go:270] X Problems detected in kubelet:
	W1009 18:49:58.701080  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.701088  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.701100  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.701108  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.701119  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:49:58.701126  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:49:58.701136  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:50:08.702790  304033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:50:08.718870  304033 api_server.go:72] duration metric: took 2m33.287899548s to wait for apiserver process to appear ...
	I1009 18:50:08.718903  304033 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:50:08.718942  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:50:08.719007  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:50:08.767775  304033 cri.go:89] found id: "4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:50:08.767800  304033 cri.go:89] found id: ""
	I1009 18:50:08.767808  304033 logs.go:282] 1 containers: [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06]
	I1009 18:50:08.767889  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.771495  304033 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:50:08.771568  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:50:08.809558  304033 cri.go:89] found id: "0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:50:08.809583  304033 cri.go:89] found id: ""
	I1009 18:50:08.809592  304033 logs.go:282] 1 containers: [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987]
	I1009 18:50:08.809650  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.813279  304033 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:50:08.813351  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:50:08.859789  304033 cri.go:89] found id: "a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:50:08.859813  304033 cri.go:89] found id: ""
	I1009 18:50:08.859859  304033 logs.go:282] 1 containers: [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6]
	I1009 18:50:08.859920  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.863994  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:50:08.864072  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:50:08.908759  304033 cri.go:89] found id: "5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:50:08.908784  304033 cri.go:89] found id: ""
	I1009 18:50:08.908794  304033 logs.go:282] 1 containers: [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f]
	I1009 18:50:08.908882  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.913541  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:50:08.913644  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:50:08.954544  304033 cri.go:89] found id: "b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:50:08.954574  304033 cri.go:89] found id: ""
	I1009 18:50:08.954583  304033 logs.go:282] 1 containers: [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4]
	I1009 18:50:08.954642  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.958369  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:50:08.958456  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:50:08.999412  304033 cri.go:89] found id: "d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:50:08.999434  304033 cri.go:89] found id: ""
	I1009 18:50:08.999444  304033 logs.go:282] 1 containers: [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6]
	I1009 18:50:08.999503  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:09.003616  304033 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:50:09.003701  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:50:09.052290  304033 cri.go:89] found id: "658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:50:09.052370  304033 cri.go:89] found id: ""
	I1009 18:50:09.052387  304033 logs.go:282] 1 containers: [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4]
	I1009 18:50:09.052455  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:09.056781  304033 logs.go:123] Gathering logs for kindnet [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4] ...
	I1009 18:50:09.056807  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:50:09.100616  304033 logs.go:123] Gathering logs for container status ...
	I1009 18:50:09.100654  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:50:09.164159  304033 logs.go:123] Gathering logs for kube-apiserver [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06] ...
	I1009 18:50:09.164190  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:50:09.249227  304033 logs.go:123] Gathering logs for kube-scheduler [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f] ...
	I1009 18:50:09.249267  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:50:09.301737  304033 logs.go:123] Gathering logs for kube-proxy [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4] ...
	I1009 18:50:09.301772  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:50:09.341270  304033 logs.go:123] Gathering logs for kube-controller-manager [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6] ...
	I1009 18:50:09.341299  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:50:09.407659  304033 logs.go:123] Gathering logs for coredns [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6] ...
	I1009 18:50:09.407696  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:50:09.449189  304033 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:50:09.449229  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:50:09.548598  304033 logs.go:123] Gathering logs for kubelet ...
	I1009 18:50:09.548639  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:50:09.620389  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.018098    1507 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.620655  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.018147    1507 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.622474  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.075946    1507 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.622684  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.622869  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.623096  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.623283  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.623511  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:50:09.663532  304033 logs.go:123] Gathering logs for dmesg ...
	I1009 18:50:09.663558  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:50:09.680249  304033 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:50:09.680279  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:50:09.824449  304033 logs.go:123] Gathering logs for etcd [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987] ...
	I1009 18:50:09.824479  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:50:09.871748  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:50:09.871776  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:50:09.871877  304033 out.go:270] X Problems detected in kubelet:
	W1009 18:50:09.871895  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.871917  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.871931  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.871946  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.871961  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:50:09.871968  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:50:09.871979  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:50:19.873265  304033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 18:50:19.881156  304033 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 18:50:19.882175  304033 api_server.go:141] control plane version: v1.31.1
	I1009 18:50:19.882199  304033 api_server.go:131] duration metric: took 11.163288571s to wait for apiserver health ...
	I1009 18:50:19.882208  304033 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:50:19.882230  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:50:19.882293  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:50:19.926772  304033 cri.go:89] found id: "4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:50:19.926800  304033 cri.go:89] found id: ""
	I1009 18:50:19.926809  304033 logs.go:282] 1 containers: [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06]
	I1009 18:50:19.926869  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:19.931070  304033 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:50:19.931146  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:50:19.970420  304033 cri.go:89] found id: "0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:50:19.970442  304033 cri.go:89] found id: ""
	I1009 18:50:19.970450  304033 logs.go:282] 1 containers: [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987]
	I1009 18:50:19.970505  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:19.974056  304033 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:50:19.974130  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:50:20.034116  304033 cri.go:89] found id: "a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:50:20.034145  304033 cri.go:89] found id: ""
	I1009 18:50:20.034155  304033 logs.go:282] 1 containers: [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6]
	I1009 18:50:20.034226  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.038516  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:50:20.038602  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:50:20.081871  304033 cri.go:89] found id: "5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:50:20.081963  304033 cri.go:89] found id: ""
	I1009 18:50:20.082004  304033 logs.go:282] 1 containers: [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f]
	I1009 18:50:20.082135  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.086281  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:50:20.086481  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:50:20.132872  304033 cri.go:89] found id: "b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:50:20.132899  304033 cri.go:89] found id: ""
	I1009 18:50:20.132949  304033 logs.go:282] 1 containers: [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4]
	I1009 18:50:20.133026  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.136819  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:50:20.136905  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:50:20.181617  304033 cri.go:89] found id: "d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:50:20.181649  304033 cri.go:89] found id: ""
	I1009 18:50:20.181659  304033 logs.go:282] 1 containers: [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6]
	I1009 18:50:20.181727  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.185837  304033 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:50:20.185948  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:50:20.225997  304033 cri.go:89] found id: "658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:50:20.226020  304033 cri.go:89] found id: ""
	I1009 18:50:20.226029  304033 logs.go:282] 1 containers: [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4]
	I1009 18:50:20.226106  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.229688  304033 logs.go:123] Gathering logs for kube-scheduler [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f] ...
	I1009 18:50:20.229716  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:50:20.283311  304033 logs.go:123] Gathering logs for kube-controller-manager [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6] ...
	I1009 18:50:20.283341  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:50:20.356643  304033 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:50:20.356688  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:50:20.453176  304033 logs.go:123] Gathering logs for dmesg ...
	I1009 18:50:20.453218  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:50:20.469529  304033 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:50:20.469558  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:50:20.613959  304033 logs.go:123] Gathering logs for kube-apiserver [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06] ...
	I1009 18:50:20.613990  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:50:20.665515  304033 logs.go:123] Gathering logs for etcd [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987] ...
	I1009 18:50:20.665558  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:50:20.710862  304033 logs.go:123] Gathering logs for coredns [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6] ...
	I1009 18:50:20.710894  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:50:20.755271  304033 logs.go:123] Gathering logs for kube-proxy [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4] ...
	I1009 18:50:20.755301  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:50:20.794138  304033 logs.go:123] Gathering logs for kindnet [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4] ...
	I1009 18:50:20.794168  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:50:20.839676  304033 logs.go:123] Gathering logs for container status ...
	I1009 18:50:20.839705  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:50:20.912684  304033 logs.go:123] Gathering logs for kubelet ...
	I1009 18:50:20.912735  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:50:20.985478  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.018098    1507 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-527950' and this object
	W1009 18:50:20.985720  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.018147    1507 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:20.987574  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.075946    1507 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-527950' and this object
	W1009 18:50:20.987790  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:20.987983  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:20.988208  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:20.988400  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:20.988647  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:50:21.030574  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:50:21.030613  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:50:21.030677  304033 out.go:270] X Problems detected in kubelet:
	W1009 18:50:21.030693  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:21.030705  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:21.030714  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:21.030726  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:21.030733  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:50:21.030743  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:50:21.030749  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:50:31.043130  304033 system_pods.go:59] 18 kube-system pods found
	I1009 18:50:31.043171  304033 system_pods.go:61] "coredns-7c65d6cfc9-6xlwc" [7b657db0-d9a2-4b1b-b040-1594861c5187] Running
	I1009 18:50:31.043178  304033 system_pods.go:61] "csi-hostpath-attacher-0" [9033f355-551e-4197-9103-ea28436fc20f] Running
	I1009 18:50:31.043183  304033 system_pods.go:61] "csi-hostpath-resizer-0" [6c901c73-0f64-445c-b5bb-1d56a1df3871] Running
	I1009 18:50:31.043187  304033 system_pods.go:61] "csi-hostpathplugin-fhvs9" [c5ab2902-7c70-48f4-9c56-2648607461bc] Running
	I1009 18:50:31.043192  304033 system_pods.go:61] "etcd-addons-527950" [a5818fe5-761e-4c9f-b467-145d8ad3a673] Running
	I1009 18:50:31.043196  304033 system_pods.go:61] "kindnet-c47lt" [c4190aa2-5c00-4d57-b499-893009495d85] Running
	I1009 18:50:31.043200  304033 system_pods.go:61] "kube-apiserver-addons-527950" [6c23db94-a291-469a-96d9-268a10403749] Running
	I1009 18:50:31.043205  304033 system_pods.go:61] "kube-controller-manager-addons-527950" [fdde003f-f7c2-4183-82f0-f0ea5f90142c] Running
	I1009 18:50:31.043211  304033 system_pods.go:61] "kube-ingress-dns-minikube" [32f8c970-729d-4715-8ee2-58f3193bef35] Running
	I1009 18:50:31.043215  304033 system_pods.go:61] "kube-proxy-ffxxn" [de0273e2-e2c4-41f3-af01-1896f193ada7] Running
	I1009 18:50:31.043218  304033 system_pods.go:61] "kube-scheduler-addons-527950" [f3058758-766c-46c7-baf3-0b9adde14be4] Running
	I1009 18:50:31.043223  304033 system_pods.go:61] "metrics-server-84c5f94fbc-2rc87" [e64a4405-7389-449b-b03d-16e9b8fca7b6] Running
	I1009 18:50:31.043227  304033 system_pods.go:61] "nvidia-device-plugin-daemonset-frbq8" [b905a7fe-20fc-4877-8f83-6613af7e0f2b] Running
	I1009 18:50:31.043234  304033 system_pods.go:61] "registry-66c9cd494c-dqnph" [63b4033a-0f05-44d5-becd-204fc75b1b5c] Running
	I1009 18:50:31.043238  304033 system_pods.go:61] "registry-proxy-l7mmn" [2399aceb-9c2c-40ca-9f5f-edd537c9676d] Running
	I1009 18:50:31.043242  304033 system_pods.go:61] "snapshot-controller-56fcc65765-7tc9k" [3d03cc14-c35a-4734-a2d8-efffe0f29a73] Running
	I1009 18:50:31.043248  304033 system_pods.go:61] "snapshot-controller-56fcc65765-rvjvp" [0623d873-45a3-4d01-bdec-f4a397c4712e] Running
	I1009 18:50:31.043252  304033 system_pods.go:61] "storage-provisioner" [3c893131-8d79-4974-a73a-7dd25740dbf4] Running
	I1009 18:50:31.043257  304033 system_pods.go:74] duration metric: took 11.161043865s to wait for pod list to return data ...
	I1009 18:50:31.043268  304033 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:50:31.045688  304033 default_sa.go:45] found service account: "default"
	I1009 18:50:31.045718  304033 default_sa.go:55] duration metric: took 2.444131ms for default service account to be created ...
	I1009 18:50:31.045729  304033 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:50:31.056292  304033 system_pods.go:86] 18 kube-system pods found
	I1009 18:50:31.056329  304033 system_pods.go:89] "coredns-7c65d6cfc9-6xlwc" [7b657db0-d9a2-4b1b-b040-1594861c5187] Running
	I1009 18:50:31.056338  304033 system_pods.go:89] "csi-hostpath-attacher-0" [9033f355-551e-4197-9103-ea28436fc20f] Running
	I1009 18:50:31.056367  304033 system_pods.go:89] "csi-hostpath-resizer-0" [6c901c73-0f64-445c-b5bb-1d56a1df3871] Running
	I1009 18:50:31.056380  304033 system_pods.go:89] "csi-hostpathplugin-fhvs9" [c5ab2902-7c70-48f4-9c56-2648607461bc] Running
	I1009 18:50:31.056386  304033 system_pods.go:89] "etcd-addons-527950" [a5818fe5-761e-4c9f-b467-145d8ad3a673] Running
	I1009 18:50:31.056395  304033 system_pods.go:89] "kindnet-c47lt" [c4190aa2-5c00-4d57-b499-893009495d85] Running
	I1009 18:50:31.056400  304033 system_pods.go:89] "kube-apiserver-addons-527950" [6c23db94-a291-469a-96d9-268a10403749] Running
	I1009 18:50:31.056406  304033 system_pods.go:89] "kube-controller-manager-addons-527950" [fdde003f-f7c2-4183-82f0-f0ea5f90142c] Running
	I1009 18:50:31.056411  304033 system_pods.go:89] "kube-ingress-dns-minikube" [32f8c970-729d-4715-8ee2-58f3193bef35] Running
	I1009 18:50:31.056419  304033 system_pods.go:89] "kube-proxy-ffxxn" [de0273e2-e2c4-41f3-af01-1896f193ada7] Running
	I1009 18:50:31.056423  304033 system_pods.go:89] "kube-scheduler-addons-527950" [f3058758-766c-46c7-baf3-0b9adde14be4] Running
	I1009 18:50:31.056434  304033 system_pods.go:89] "metrics-server-84c5f94fbc-2rc87" [e64a4405-7389-449b-b03d-16e9b8fca7b6] Running
	I1009 18:50:31.056443  304033 system_pods.go:89] "nvidia-device-plugin-daemonset-frbq8" [b905a7fe-20fc-4877-8f83-6613af7e0f2b] Running
	I1009 18:50:31.056448  304033 system_pods.go:89] "registry-66c9cd494c-dqnph" [63b4033a-0f05-44d5-becd-204fc75b1b5c] Running
	I1009 18:50:31.056455  304033 system_pods.go:89] "registry-proxy-l7mmn" [2399aceb-9c2c-40ca-9f5f-edd537c9676d] Running
	I1009 18:50:31.056462  304033 system_pods.go:89] "snapshot-controller-56fcc65765-7tc9k" [3d03cc14-c35a-4734-a2d8-efffe0f29a73] Running
	I1009 18:50:31.056467  304033 system_pods.go:89] "snapshot-controller-56fcc65765-rvjvp" [0623d873-45a3-4d01-bdec-f4a397c4712e] Running
	I1009 18:50:31.056477  304033 system_pods.go:89] "storage-provisioner" [3c893131-8d79-4974-a73a-7dd25740dbf4] Running
	I1009 18:50:31.056485  304033 system_pods.go:126] duration metric: took 10.749976ms to wait for k8s-apps to be running ...
	I1009 18:50:31.056497  304033 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:50:31.056555  304033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:50:31.069211  304033 system_svc.go:56] duration metric: took 12.704113ms WaitForService to wait for kubelet
	I1009 18:50:31.069241  304033 kubeadm.go:582] duration metric: took 2m55.6382755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:50:31.069261  304033 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:50:31.072390  304033 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 18:50:31.072424  304033 node_conditions.go:123] node cpu capacity is 2
	I1009 18:50:31.072435  304033 node_conditions.go:105] duration metric: took 3.168872ms to run NodePressure ...
	I1009 18:50:31.072448  304033 start.go:241] waiting for startup goroutines ...
	I1009 18:50:31.072455  304033 start.go:246] waiting for cluster config update ...
	I1009 18:50:31.072475  304033 start.go:255] writing updated cluster config ...
	I1009 18:50:31.072768  304033 ssh_runner.go:195] Run: rm -f paused
	I1009 18:50:31.436305  304033 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 18:50:31.438854  304033 out.go:177] * Done! kubectl is now configured to use "addons-527950" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:01:30 addons-527950 crio[969]: time="2024-10-09 19:01:30.648436321Z" level=info msg="Removed pod sandbox: 45dd0e09568ba8ccada3417f0ea00ffc23fab6e81911c1aa156a787c1e657197" id=c797b8a5-c0be-4371-a3de-b9bf945b5caf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.111301753Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-5xw8s/POD" id=4a088aa0-d805-45a2-9825-4f710a8d088b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.111366860Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.148999191Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-5xw8s Namespace:default ID:8bba71988dce55d1a6403ec95a541e1744feadd8ab021de72a606045df1d3f83 UID:49fb9e1c-ef53-49f3-b595-e80c8e2e8c83 NetNS:/var/run/netns/ab127a3b-2312-4e2c-b772-239a1bdc89aa Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.149074013Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-5xw8s to CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.173469243Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-5xw8s Namespace:default ID:8bba71988dce55d1a6403ec95a541e1744feadd8ab021de72a606045df1d3f83 UID:49fb9e1c-ef53-49f3-b595-e80c8e2e8c83 NetNS:/var/run/netns/ab127a3b-2312-4e2c-b772-239a1bdc89aa Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.173638652Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-5xw8s for CNI network kindnet (type=ptp)"
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.178760780Z" level=info msg="Ran pod sandbox 8bba71988dce55d1a6403ec95a541e1744feadd8ab021de72a606045df1d3f83 with infra container: default/hello-world-app-55bf9c44b4-5xw8s/POD" id=4a088aa0-d805-45a2-9825-4f710a8d088b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.179944657Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=94b86455-3f2b-4473-86a6-fca385fbc6c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.180175948Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=94b86455-3f2b-4473-86a6-fca385fbc6c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.181198917Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=fdcae1cb-977a-45b1-82dd-f0e319924ef4 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.187817483Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 09 19:03:38 addons-527950 crio[969]: time="2024-10-09 19:03:38.495614846Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.306127918Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=fdcae1cb-977a-45b1-82dd-f0e319924ef4 name=/runtime.v1.ImageService/PullImage
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.307031110Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=928a584a-c200-4f2d-9912-dda0e2ee7e09 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.307721045Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=928a584a-c200-4f2d-9912-dda0e2ee7e09 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.309463429Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=106a96bd-f4d9-4b3a-be08-069230126c64 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.310132950Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=106a96bd-f4d9-4b3a-be08-069230126c64 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.311212008Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-5xw8s/hello-world-app" id=e75c74ab-2db6-4f90-aa99-9504ac217592 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.311303469Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.338605884Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d7027bfdd31e7285e75310351760d05092ca1586aca02860059f0c4166c5cdf6/merged/etc/passwd: no such file or directory"
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.338798661Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d7027bfdd31e7285e75310351760d05092ca1586aca02860059f0c4166c5cdf6/merged/etc/group: no such file or directory"
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.383977472Z" level=info msg="Created container 98d5082f8a293409ccad152875f2f1968359ba3bc45c421e7f539ee3b6124da8: default/hello-world-app-55bf9c44b4-5xw8s/hello-world-app" id=e75c74ab-2db6-4f90-aa99-9504ac217592 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.384760649Z" level=info msg="Starting container: 98d5082f8a293409ccad152875f2f1968359ba3bc45c421e7f539ee3b6124da8" id=f5db59ad-3bd2-445a-9ab3-b331cb921473 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:03:39 addons-527950 crio[969]: time="2024-10-09 19:03:39.394477774Z" level=info msg="Started container" PID=14806 containerID=98d5082f8a293409ccad152875f2f1968359ba3bc45c421e7f539ee3b6124da8 description=default/hello-world-app-55bf9c44b4-5xw8s/hello-world-app id=f5db59ad-3bd2-445a-9ab3-b331cb921473 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8bba71988dce55d1a6403ec95a541e1744feadd8ab021de72a606045df1d3f83
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	98d5082f8a293       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   8bba71988dce5       hello-world-app-55bf9c44b4-5xw8s
	b18b65bf88187       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   14247c109f31e       busybox
	f02ff73153e5e       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   92fd777867eab       nginx
	8c0dbf1ad1c32       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             14 minutes ago           Running             controller                0                   321c68f923445       ingress-nginx-controller-bc57996ff-mw22v
	faf91b9af24fc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   14 minutes ago           Exited              patch                     0                   e4abe729152df       ingress-nginx-admission-patch-h88zl
	ffbe4cd106a2b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   14 minutes ago           Exited              create                    0                   e577c1f75939e       ingress-nginx-admission-create-4xppt
	b67d45aed99f6       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        14 minutes ago           Running             metrics-server            0                   524107a4c0848       metrics-server-84c5f94fbc-2rc87
	5399ae6ba7590       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             15 minutes ago           Running             minikube-ingress-dns      0                   e6f0ed9c30165       kube-ingress-dns-minikube
	a03317a4b67c6       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             15 minutes ago           Running             coredns                   0                   d0da5a2170e78       coredns-7c65d6cfc9-6xlwc
	799e144a48722       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             15 minutes ago           Running             storage-provisioner       0                   9c95e646a7d47       storage-provisioner
	658aec82c4852       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387                           15 minutes ago           Running             kindnet-cni               0                   c45520dd5f0ea       kindnet-c47lt
	b83030b4e3f98       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             16 minutes ago           Running             kube-proxy                0                   a06994e24d95d       kube-proxy-ffxxn
	5f813d31843f7       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             16 minutes ago           Running             kube-scheduler            0                   26277e30ab828       kube-scheduler-addons-527950
	4f5e96ddc0d0d       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             16 minutes ago           Running             kube-apiserver            0                   ab714852cadcb       kube-apiserver-addons-527950
	0217e145b7d2d       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             16 minutes ago           Running             etcd                      0                   e6459e5817828       etcd-addons-527950
	d67726f6c9dd2       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             16 minutes ago           Running             kube-controller-manager   0                   b8081ea9d38e8       kube-controller-manager-addons-527950
	
	
	==> coredns [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6] <==
	[INFO] 10.244.0.11:46964 - 35684 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002828774s
	[INFO] 10.244.0.11:46964 - 51490 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000134923s
	[INFO] 10.244.0.11:46964 - 35724 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.006275703s
	[INFO] 10.244.0.11:53459 - 31746 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000217917s
	[INFO] 10.244.0.11:53459 - 31492 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000083659s
	[INFO] 10.244.0.11:51918 - 53553 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000054736s
	[INFO] 10.244.0.11:51918 - 53365 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033082s
	[INFO] 10.244.0.11:55825 - 56025 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052389s
	[INFO] 10.244.0.11:55825 - 55605 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060922s
	[INFO] 10.244.0.11:56129 - 5172 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001748248s
	[INFO] 10.244.0.11:56129 - 4975 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001739699s
	[INFO] 10.244.0.11:45409 - 48223 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054687s
	[INFO] 10.244.0.11:45409 - 48084 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000043241s
	[INFO] 10.244.0.20:45173 - 62921 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016657s
	[INFO] 10.244.0.20:57378 - 8570 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120523s
	[INFO] 10.244.0.20:40578 - 30447 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118956s
	[INFO] 10.244.0.20:35446 - 2726 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084832s
	[INFO] 10.244.0.20:49789 - 50331 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097369s
	[INFO] 10.244.0.20:57856 - 7928 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007373s
	[INFO] 10.244.0.20:38045 - 21587 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002249229s
	[INFO] 10.244.0.20:59829 - 34278 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002753057s
	[INFO] 10.244.0.20:40985 - 13344 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001853617s
	[INFO] 10.244.0.20:39404 - 43995 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001931646s
	[INFO] 10.244.0.23:39404 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174578s
	[INFO] 10.244.0.23:33578 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000172059s
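The runs of NXDOMAIN answers above are the pod resolver walking the cluster DNS search path (`<ns>.svc.cluster.local`, `svc.cluster.local`, `cluster.local`, then the host's `us-east-2.compute.internal` suffix) before the bare name finally resolves NOERROR. A minimal sketch of tallying response codes from log lines in this format — the two sample lines are copied from this report, and the parsing helper is an illustration, not part of CoreDNS or minikube:

```python
# Tally DNS response codes from CoreDNS query-log lines like those above.
import re

LOG = """\
[INFO] 10.244.0.20:49789 - 50331 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097369s
[INFO] 10.244.0.20:40985 - 13344 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001853617s
"""

# Capture the query type, queried name, and response code from each line.
PATTERN = re.compile(r'"(?P<qtype>A|AAAA) IN (?P<name>\S+) .*?" (?P<rcode>\S+)')

def tally(log: str) -> dict:
    counts: dict = {}
    for line in log.splitlines():
        m = PATTERN.search(line)
        if m:
            counts[m.group("rcode")] = counts.get(m.group("rcode"), 0) + 1
    return counts

print(tally(LOG))  # → {'NXDOMAIN': 1, 'NOERROR': 1}
```

Counting NXDOMAIN against NOERROR this way makes search-path churn visible; in the log above every external lookup pays several NXDOMAIN round trips before the final hit.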
	
	
	==> describe nodes <==
	Name:               addons-527950
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-527950
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=addons-527950
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T18_47_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-527950
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 18:47:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-527950
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:03:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:01:37 +0000   Wed, 09 Oct 2024 18:47:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:01:37 +0000   Wed, 09 Oct 2024 18:47:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:01:37 +0000   Wed, 09 Oct 2024 18:47:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:01:37 +0000   Wed, 09 Oct 2024 18:48:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-527950
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1a4c86c9e134492be73c30935e3dc26
	  System UUID:                cd82509e-5289-4bc4-999f-dab5bb62981a
	  Boot ID:                    0eb94caa-53b6-43b0-a9b7-c0b1f1bd6146
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-5xw8s            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-mw22v    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-6xlwc                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-addons-527950                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-c47lt                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-527950                250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-527950       200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-ffxxn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-527950                100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-84c5f94fbc-2rc87             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
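The percentages in the "Allocated resources" table are pod requests relative to the node's allocatable capacity (2 CPUs and 8022304Ki of memory, per the Allocatable section above). A quick sketch confirming the arithmetic — the constants are copied from this report, and the integer truncation mirrors how the percentages are rendered here:

```python
# Verify the "Allocated resources" percentages against allocatable capacity.
ALLOCATABLE_CPU_M = 2000        # 2 cores, in millicores
ALLOCATABLE_MEM_KI = 8022304    # from the node's Allocatable section

cpu_requests_m = 1050           # total CPU requests shown in the table
mem_requests_mi = 510           # total memory requests shown in the table

# Integer division truncates toward zero, matching the displayed values.
cpu_pct = cpu_requests_m * 100 // ALLOCATABLE_CPU_M
mem_pct = mem_requests_mi * 1024 * 100 // ALLOCATABLE_MEM_KI

print(cpu_pct, mem_pct)  # → 52 6
```

With only 2 allocatable CPUs, the addon set already commits over half the node's CPU in requests, which is worth keeping in mind when scheduling failures appear in reports like this one.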
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 15m   kube-proxy       
	  Normal   Starting                 16m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m   kubelet          Node addons-527950 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m   kubelet          Node addons-527950 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m   kubelet          Node addons-527950 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m   node-controller  Node addons-527950 event: Registered Node addons-527950 in Controller
	  Normal   NodeReady                15m   kubelet          Node addons-527950 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015488] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.459769] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.052269] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016609] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.591205] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.456398] kauditd_printk_skb: 34 callbacks suppressed
	[Oct 9 17:28] hrtimer: interrupt took 5876350 ns
	[Oct 9 17:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987] <==
	{"level":"info","ts":"2024-10-09T18:47:23.667994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-09T18:47:23.668044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-09T18:47:23.668089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:23.668123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:23.668161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:23.668196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:23.671841Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:23.673734Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-527950 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T18:47:23.675856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:47:23.675880Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:23.676011Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:23.676073Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:23.676129Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T18:47:23.676163Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T18:47:23.676198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:47:23.676869Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:47:23.677798Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T18:47:23.676869Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:47:23.685106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-09T18:57:25.505410Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1490}
	{"level":"info","ts":"2024-10-09T18:57:25.536695Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1490,"took":"30.721598ms","hash":3020432765,"current-db-size-bytes":6037504,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3137536,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-10-09T18:57:25.536750Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3020432765,"revision":1490,"compact-revision":-1}
	{"level":"info","ts":"2024-10-09T19:02:25.510882Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1909}
	{"level":"info","ts":"2024-10-09T19:02:25.528241Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1909,"took":"16.748567ms","hash":2211837678,"current-db-size-bytes":6037504,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":4411392,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-10-09T19:02:25.528297Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2211837678,"revision":1909,"compact-revision":1490}
	
	
	==> kernel <==
	 19:03:39 up  2:46,  0 users,  load average: 0.28, 0.40, 0.59
	Linux addons-527950 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4] <==
	I1009 19:01:31.623195       1 main.go:300] handling current node
	I1009 19:01:41.621345       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:01:41.621377       1 main.go:300] handling current node
	I1009 19:01:51.623080       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:01:51.623218       1 main.go:300] handling current node
	I1009 19:02:01.623762       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:02:01.623793       1 main.go:300] handling current node
	I1009 19:02:11.624805       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:02:11.624837       1 main.go:300] handling current node
	I1009 19:02:21.620831       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:02:21.620868       1 main.go:300] handling current node
	I1009 19:02:31.624175       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:02:31.624295       1 main.go:300] handling current node
	I1009 19:02:41.621809       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:02:41.621841       1 main.go:300] handling current node
	I1009 19:02:51.623377       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:02:51.623488       1 main.go:300] handling current node
	I1009 19:03:01.628045       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:01.628078       1 main.go:300] handling current node
	I1009 19:03:11.623927       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:11.623960       1 main.go:300] handling current node
	I1009 19:03:21.620826       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:21.620859       1 main.go:300] handling current node
	I1009 19:03:31.621761       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:31.621794       1 main.go:300] handling current node
	
	
	==> kube-apiserver [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06] <==
	I1009 18:58:45.052136       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.239.160"}
	E1009 18:59:18.858910       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1009 18:59:19.632162       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1009 18:59:19.642659       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1009 18:59:19.652821       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1009 18:59:34.655531       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1009 19:00:26.824119       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 19:00:58.167589       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.167650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 19:00:58.201909       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.202034       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 19:00:58.223056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.224118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 19:00:58.305794       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.305915       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 19:00:58.441489       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.441615       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 19:00:59.305838       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 19:00:59.442058       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 19:00:59.446587       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1009 19:01:12.023912       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1009 19:01:13.068312       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1009 19:01:17.583406       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 19:01:17.871534       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.211.191"}
	I1009 19:03:38.043333       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.71.197"}
	
	
	==> kube-controller-manager [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6] <==
	E1009 19:02:04.509207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:09.180782       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:09.180826       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:18.768506       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:18.768549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:32.566066       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:32.566110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:51.031318       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:51.031455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:55.076404       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:55.076451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:03:03.474325       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:03.474371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:03:16.151979       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:16.152022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:03:24.488670       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:24.488713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:03:30.160757       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:30.160824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 19:03:37.811091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.542888ms"
	I1009 19:03:37.823700       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.096765ms"
	I1009 19:03:37.824477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="51.282µs"
	I1009 19:03:37.830895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.251µs"
	I1009 19:03:39.717009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.391063ms"
	I1009 19:03:39.717820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.968µs"
	
	
	==> kube-proxy [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4] <==
	I1009 18:47:42.077293       1 server_linux.go:66] "Using iptables proxy"
	I1009 18:47:42.418800       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1009 18:47:42.419065       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:47:42.511587       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:47:42.511728       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:47:42.514954       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:47:42.517983       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:47:42.518088       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:47:42.519931       1 config.go:199] "Starting service config controller"
	I1009 18:47:42.520031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:47:42.520066       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:47:42.520071       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:47:42.520890       1 config.go:328] "Starting node config controller"
	I1009 18:47:42.520952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:47:42.621925       1 shared_informer.go:320] Caches are synced for node config
	I1009 18:47:42.637600       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:47:42.637637       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f] <==
	W1009 18:47:27.693753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:27.696664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:27.693786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 18:47:27.696695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:27.693820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:27.696734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:27.693888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 18:47:27.696753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.549655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:28.549702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.677965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:28.678019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.686047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 18:47:28.686090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.693367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 18:47:28.693412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.699003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 18:47:28.699047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.718753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 18:47:28.718866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.809951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 18:47:28.810070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.817852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 18:47:28.817945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1009 18:47:29.278732       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:02:00 addons-527950 kubelet[1507]: E1009 19:02:00.619310    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500520618960251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:00 addons-527950 kubelet[1507]: E1009 19:02:00.619357    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500520618960251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:10 addons-527950 kubelet[1507]: E1009 19:02:10.621966    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500530621709073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:10 addons-527950 kubelet[1507]: E1009 19:02:10.622009    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500530621709073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:20 addons-527950 kubelet[1507]: E1009 19:02:20.625236    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500540624988542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:20 addons-527950 kubelet[1507]: E1009 19:02:20.625280    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500540624988542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:29 addons-527950 kubelet[1507]: I1009 19:02:29.303475    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:02:30 addons-527950 kubelet[1507]: E1009 19:02:30.307528    1507 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553, memory: /docker/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/system.slice/kubelet.service"
	Oct 09 19:02:30 addons-527950 kubelet[1507]: E1009 19:02:30.627868    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500550627569317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:30 addons-527950 kubelet[1507]: E1009 19:02:30.627904    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500550627569317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:40 addons-527950 kubelet[1507]: E1009 19:02:40.630509    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500560630241639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:40 addons-527950 kubelet[1507]: E1009 19:02:40.630544    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500560630241639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:50 addons-527950 kubelet[1507]: E1009 19:02:50.633663    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500570633404827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:50 addons-527950 kubelet[1507]: E1009 19:02:50.633700    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500570633404827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:00 addons-527950 kubelet[1507]: E1009 19:03:00.636934    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500580636693167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:00 addons-527950 kubelet[1507]: E1009 19:03:00.636987    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500580636693167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:10 addons-527950 kubelet[1507]: E1009 19:03:10.640251    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500590640034740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:10 addons-527950 kubelet[1507]: E1009 19:03:10.640291    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500590640034740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:20 addons-527950 kubelet[1507]: E1009 19:03:20.642598    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500600642345825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:20 addons-527950 kubelet[1507]: E1009 19:03:20.642637    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500600642345825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:30 addons-527950 kubelet[1507]: E1009 19:03:30.645686    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500610645434251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:30 addons-527950 kubelet[1507]: E1009 19:03:30.645725    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500610645434251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597891,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:37 addons-527950 kubelet[1507]: I1009 19:03:37.808995    1507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=130.987625133 podStartE2EDuration="13m5.808978357s" podCreationTimestamp="2024-10-09 18:50:32 +0000 UTC" firstStartedPulling="2024-10-09 18:50:32.534760519 +0000 UTC m=+182.436632936" lastFinishedPulling="2024-10-09 19:01:27.356113743 +0000 UTC m=+837.257986160" observedRunningTime="2024-10-09 19:01:28.448328623 +0000 UTC m=+838.350201049" watchObservedRunningTime="2024-10-09 19:03:37.808978357 +0000 UTC m=+967.710850774"
	Oct 09 19:03:37 addons-527950 kubelet[1507]: I1009 19:03:37.905816    1507 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t56tv\" (UniqueName: \"kubernetes.io/projected/49fb9e1c-ef53-49f3-b595-e80c8e2e8c83-kube-api-access-t56tv\") pod \"hello-world-app-55bf9c44b4-5xw8s\" (UID: \"49fb9e1c-ef53-49f3-b595-e80c8e2e8c83\") " pod="default/hello-world-app-55bf9c44b4-5xw8s"
	Oct 09 19:03:39 addons-527950 kubelet[1507]: I1009 19:03:39.304213    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [799e144a48722ce5cda01719fdb32be7ddb6725e7b6e5f91aac8ca8c0dcde633] <==
	I1009 18:48:22.888446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:48:22.929675       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:48:22.929750       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:48:22.949509       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:48:22.949676       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-527950_64395e50-c268-4440-b675-d6757c097dd6!
	I1009 18:48:22.964178       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2bc5529d-9755-4c7c-824d-aeaa87ec6d9e", APIVersion:"v1", ResourceVersion:"873", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-527950_64395e50-c268-4440-b675-d6757c097dd6 became leader
	I1009 18:48:23.050065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-527950_64395e50-c268-4440-b675-d6757c097dd6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-527950 -n addons-527950
helpers_test.go:261: (dbg) Run:  kubectl --context addons-527950 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-4xppt ingress-nginx-admission-patch-h88zl
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-527950 describe pod ingress-nginx-admission-create-4xppt ingress-nginx-admission-patch-h88zl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-527950 describe pod ingress-nginx-admission-create-4xppt ingress-nginx-admission-patch-h88zl: exit status 1 (88.613508ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4xppt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h88zl" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-527950 describe pod ingress-nginx-admission-create-4xppt ingress-nginx-admission-patch-h88zl: exit status 1
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 addons disable ingress-dns --alsologtostderr -v=1: (1.67054651s)
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable ingress --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 addons disable ingress --alsologtostderr -v=1: (7.790237368s)
--- FAIL: TestAddons/parallel/Ingress (153.26s)

TestAddons/parallel/MetricsServer (336.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.224528ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2rc87" [e64a4405-7389-449b-b03d-16e9b8fca7b6] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003726402s
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (100.924143ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 11m49.959266883s

** /stderr **
I1009 18:59:24.962399  303278 retry.go:31] will retry after 3.331371608s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (93.983892ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 11m53.385482143s

** /stderr **
I1009 18:59:28.389161  303278 retry.go:31] will retry after 3.600422896s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (96.723038ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 11m57.084847772s

** /stderr **
I1009 18:59:32.088548  303278 retry.go:31] will retry after 4.869606695s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (107.015243ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 12m2.066059354s

** /stderr **
I1009 18:59:37.069302  303278 retry.go:31] will retry after 12.199736783s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (101.173135ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 12m14.372665618s

** /stderr **
I1009 18:59:49.375669  303278 retry.go:31] will retry after 19.908479033s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (98.380741ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 12m34.379784713s

** /stderr **
I1009 19:00:09.382867  303278 retry.go:31] will retry after 18.110752525s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (98.876605ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 12m52.590256109s

** /stderr **
I1009 19:00:27.593269  303278 retry.go:31] will retry after 39.815967764s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (84.01709ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 13m32.490311491s

** /stderr **
I1009 19:01:07.493560  303278 retry.go:31] will retry after 36.369903153s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (92.007127ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 14m8.953451728s

** /stderr **
I1009 19:01:43.956355  303278 retry.go:31] will retry after 1m10.408173861s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (92.878216ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 15m19.453870667s

** /stderr **
I1009 19:02:54.457803  303278 retry.go:31] will retry after 30.933491364s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (85.783342ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 15m50.474814478s

** /stderr **
I1009 19:03:25.477848  303278 retry.go:31] will retry after 1m26.511427581s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-527950 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-527950 top pods -n kube-system: exit status 1 (94.830893ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6xlwc, age: 17m17.08817652s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-527950
helpers_test.go:235: (dbg) docker inspect addons-527950:

-- stdout --
	[
	    {
	        "Id": "2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553",
	        "Created": "2024-10-09T18:47:05.648185716Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-09T18:47:05.802356014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/hostname",
	        "HostsPath": "/var/lib/docker/containers/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/hosts",
	        "LogPath": "/var/lib/docker/containers/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553/2dc08be679301c5850c16e37ce9df7c7dbdf9b92e0e391cfe20e45617b988553-json.log",
	        "Name": "/addons-527950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-527950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-527950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/57d52d8957517c47a94d772a4ec553e1973f6b4b31859c22ead328b38e49865f-init/diff:/var/lib/docker/overlay2/32ad11673c72cdd61b2cbdcf2c702ee1fe66adabc05fc451cdf50fb47fc60aee/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57d52d8957517c47a94d772a4ec553e1973f6b4b31859c22ead328b38e49865f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57d52d8957517c47a94d772a4ec553e1973f6b4b31859c22ead328b38e49865f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57d52d8957517c47a94d772a4ec553e1973f6b4b31859c22ead328b38e49865f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-527950",
	                "Source": "/var/lib/docker/volumes/addons-527950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-527950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-527950",
	                "name.minikube.sigs.k8s.io": "addons-527950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4df18c8347fba75af91bc3ac819789f4ff4d4b035ff43dd1ad95716976a94617",
	            "SandboxKey": "/var/run/docker/netns/4df18c8347fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-527950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "764ce54092e8dc7bf53c454d0b923423aa19daaa0febb9374339d91784840cb0",
	                    "EndpointID": "edab63be22d86ee9116a85700006d56d2f146a683906f178f24b04f3444ce549",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-527950",
	                        "2dc08be67930"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-527950 -n addons-527950
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 logs -n 25: (1.389192051s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-506477 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | download-docker-506477                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-506477                                                                   | download-docker-506477 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-492289   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | binary-mirror-492289                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35183                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-492289                                                                     | binary-mirror-492289   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| addons  | disable dashboard -p                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-527950                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-527950                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-527950 --wait=true                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:50 UTC | 09 Oct 24 18:50 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | -p addons-527950                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:59 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-527950 ip                                                                            | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-527950 ssh cat                                                                       | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | /opt/local-path-provisioner/pvc-78e4294a-ee74-4947-a0a7-ae40d0f13e44_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 19:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:00 UTC | 09 Oct 24 19:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:00 UTC | 09 Oct 24 19:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-527950 addons                                                                        | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:01 UTC | 09 Oct 24 19:01 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-527950 ssh curl -s                                                                   | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-527950 ip                                                                            | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:03 UTC | 09 Oct 24 19:03 UTC |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:03 UTC | 09 Oct 24 19:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-527950 addons disable                                                                | addons-527950          | jenkins | v1.34.0 | 09 Oct 24 19:03 UTC | 09 Oct 24 19:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:40.967386  304033 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:40.967587  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:40.967614  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:40.967632  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:40.967935  304033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 18:46:40.968422  304033 out.go:352] Setting JSON to false
	I1009 18:46:40.969276  304033 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8948,"bootTime":1728490653,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 18:46:40.969376  304033 start.go:139] virtualization:  
	I1009 18:46:40.972220  304033 out.go:177] * [addons-527950] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 18:46:40.974908  304033 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:46:40.974957  304033 notify.go:220] Checking for updates...
	I1009 18:46:40.977578  304033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:40.979594  304033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 18:46:40.981442  304033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	I1009 18:46:40.983587  304033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:46:40.985813  304033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:46:40.987817  304033 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:41.013393  304033 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:41.013526  304033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:41.080358  304033 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-09 18:46:41.070449123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:41.080468  304033 docker.go:318] overlay module found
	I1009 18:46:41.082663  304033 out.go:177] * Using the docker driver based on user configuration
	I1009 18:46:41.084439  304033 start.go:297] selected driver: docker
	I1009 18:46:41.084461  304033 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:41.084476  304033 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:46:41.085095  304033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:41.134759  304033 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-09 18:46:41.124919639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:41.134963  304033 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:41.135184  304033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:46:41.137132  304033 out.go:177] * Using Docker driver with root privileges
	I1009 18:46:41.139121  304033 cni.go:84] Creating CNI manager for ""
	I1009 18:46:41.139197  304033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:41.139211  304033 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:41.139304  304033 start.go:340] cluster config:
	{Name:addons-527950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:41.141358  304033 out.go:177] * Starting "addons-527950" primary control-plane node in "addons-527950" cluster
	I1009 18:46:41.143582  304033 cache.go:121] Beginning downloading kic base image for docker with crio
	I1009 18:46:41.145591  304033 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:41.147426  304033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:41.147480  304033 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1009 18:46:41.147493  304033 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:41.147491  304033 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:41.147575  304033 preload.go:172] Found /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 18:46:41.147585  304033 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 18:46:41.148086  304033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/config.json ...
	I1009 18:46:41.148129  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/config.json: {Name:mk2ee10dbe477e541f5b1df0f33b07ee974c06c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:41.161424  304033 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:41.161534  304033 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:41.161554  304033 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1009 18:46:41.161560  304033 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1009 18:46:41.161567  304033 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1009 18:46:41.161572  304033 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1009 18:46:58.564330  304033 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1009 18:46:58.564369  304033 cache.go:194] Successfully downloaded all kic artifacts
	I1009 18:46:58.564399  304033 start.go:360] acquireMachinesLock for addons-527950: {Name:mk47047584b5ff43fa0debdcf458de7b2e027c65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:46:58.564517  304033 start.go:364] duration metric: took 94.834µs to acquireMachinesLock for "addons-527950"
	I1009 18:46:58.564549  304033 start.go:93] Provisioning new machine with config: &{Name:addons-527950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:46:58.564621  304033 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:46:58.567508  304033 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1009 18:46:58.567771  304033 start.go:159] libmachine.API.Create for "addons-527950" (driver="docker")
	I1009 18:46:58.567807  304033 client.go:168] LocalClient.Create starting
	I1009 18:46:58.567942  304033 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem
	I1009 18:46:58.681112  304033 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/cert.pem
	I1009 18:46:59.329402  304033 cli_runner.go:164] Run: docker network inspect addons-527950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:46:59.344830  304033 cli_runner.go:211] docker network inspect addons-527950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:46:59.344915  304033 network_create.go:284] running [docker network inspect addons-527950] to gather additional debugging logs...
	I1009 18:46:59.344935  304033 cli_runner.go:164] Run: docker network inspect addons-527950
	W1009 18:46:59.361259  304033 cli_runner.go:211] docker network inspect addons-527950 returned with exit code 1
	I1009 18:46:59.361290  304033 network_create.go:287] error running [docker network inspect addons-527950]: docker network inspect addons-527950: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-527950 not found
	I1009 18:46:59.361305  304033 network_create.go:289] output of [docker network inspect addons-527950]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-527950 not found
	
	** /stderr **
	I1009 18:46:59.361412  304033 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:46:59.378570  304033 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c6740}
	I1009 18:46:59.378626  304033 network_create.go:124] attempt to create docker network addons-527950 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:46:59.378691  304033 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-527950 addons-527950
	I1009 18:46:59.446268  304033 network_create.go:108] docker network addons-527950 192.168.49.0/24 created
	I1009 18:46:59.446303  304033 kic.go:121] calculated static IP "192.168.49.2" for the "addons-527950" container
	I1009 18:46:59.446378  304033 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:46:59.461021  304033 cli_runner.go:164] Run: docker volume create addons-527950 --label name.minikube.sigs.k8s.io=addons-527950 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:46:59.476583  304033 oci.go:103] Successfully created a docker volume addons-527950
	I1009 18:46:59.476682  304033 cli_runner.go:164] Run: docker run --rm --name addons-527950-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527950 --entrypoint /usr/bin/test -v addons-527950:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1009 18:47:01.523666  304033 cli_runner.go:217] Completed: docker run --rm --name addons-527950-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527950 --entrypoint /usr/bin/test -v addons-527950:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (2.04694108s)
	I1009 18:47:01.523699  304033 oci.go:107] Successfully prepared a docker volume addons-527950
	I1009 18:47:01.523720  304033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:01.523740  304033 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:47:01.523809  304033 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-527950:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:47:05.581175  304033 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-527950:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.05728277s)
	I1009 18:47:05.581212  304033 kic.go:203] duration metric: took 4.057468754s to extract preloaded images to volume ...
	W1009 18:47:05.581352  304033 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 18:47:05.581464  304033 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:47:05.633858  304033 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-527950 --name addons-527950 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-527950 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-527950 --network addons-527950 --ip 192.168.49.2 --volume addons-527950:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1009 18:47:05.968005  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Running}}
	I1009 18:47:05.987768  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:06.015124  304033 cli_runner.go:164] Run: docker exec addons-527950 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:47:06.082190  304033 oci.go:144] the created container "addons-527950" has a running status.
	I1009 18:47:06.082218  304033 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa...
	I1009 18:47:06.614569  304033 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:47:06.649674  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:06.670077  304033 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:47:06.670102  304033 kic_runner.go:114] Args: [docker exec --privileged addons-527950 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:47:06.744206  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:06.768546  304033 machine.go:93] provisionDockerMachine start ...
	I1009 18:47:06.768651  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:06.788458  304033 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:06.789698  304033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1009 18:47:06.789716  304033 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:47:06.943346  304033 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-527950
	
	I1009 18:47:06.943413  304033 ubuntu.go:169] provisioning hostname "addons-527950"
	I1009 18:47:06.943516  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:06.966001  304033 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:06.966250  304033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1009 18:47:06.966271  304033 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-527950 && echo "addons-527950" | sudo tee /etc/hostname
	I1009 18:47:07.118527  304033 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-527950
	
	I1009 18:47:07.118612  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:07.134968  304033 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:07.135213  304033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1009 18:47:07.135239  304033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-527950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-527950/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-527950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:47:07.263840  304033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:47:07.263870  304033 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19780-297764/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-297764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-297764/.minikube}
	I1009 18:47:07.263901  304033 ubuntu.go:177] setting up certificates
	I1009 18:47:07.263920  304033 provision.go:84] configureAuth start
	I1009 18:47:07.263987  304033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527950
	I1009 18:47:07.280805  304033 provision.go:143] copyHostCerts
	I1009 18:47:07.280887  304033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-297764/.minikube/ca.pem (1078 bytes)
	I1009 18:47:07.281018  304033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-297764/.minikube/cert.pem (1123 bytes)
	I1009 18:47:07.281074  304033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-297764/.minikube/key.pem (1675 bytes)
	I1009 18:47:07.281116  304033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-297764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca-key.pem org=jenkins.addons-527950 san=[127.0.0.1 192.168.49.2 addons-527950 localhost minikube]
	I1009 18:47:07.505349  304033 provision.go:177] copyRemoteCerts
	I1009 18:47:07.505420  304033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:47:07.505462  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:07.522143  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:07.616808  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:47:07.642666  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:47:07.670787  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:47:07.695667  304033 provision.go:87] duration metric: took 431.722087ms to configureAuth
	I1009 18:47:07.695694  304033 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:47:07.695902  304033 config.go:182] Loaded profile config "addons-527950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:07.696007  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:07.712759  304033 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:07.713002  304033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1009 18:47:07.713023  304033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:47:07.940948  304033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:47:07.940973  304033 machine.go:96] duration metric: took 1.172408163s to provisionDockerMachine
	I1009 18:47:07.940983  304033 client.go:171] duration metric: took 9.373164811s to LocalClient.Create
	I1009 18:47:07.941004  304033 start.go:167] duration metric: took 9.373235103s to libmachine.API.Create "addons-527950"
	I1009 18:47:07.941011  304033 start.go:293] postStartSetup for "addons-527950" (driver="docker")
	I1009 18:47:07.941028  304033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:47:07.941100  304033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:47:07.941147  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:07.963555  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:08.061537  304033 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:47:08.064996  304033 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:47:08.065031  304033 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:47:08.065042  304033 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:47:08.065050  304033 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1009 18:47:08.065062  304033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-297764/.minikube/addons for local assets ...
	I1009 18:47:08.065138  304033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-297764/.minikube/files for local assets ...
	I1009 18:47:08.065167  304033 start.go:296] duration metric: took 124.149363ms for postStartSetup
	I1009 18:47:08.065491  304033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527950
	I1009 18:47:08.082273  304033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/config.json ...
	I1009 18:47:08.082606  304033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:47:08.082664  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:08.100526  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:08.188631  304033 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:47:08.193048  304033 start.go:128] duration metric: took 9.628410948s to createHost
	I1009 18:47:08.193072  304033 start.go:83] releasing machines lock for "addons-527950", held for 9.62854039s
	I1009 18:47:08.193145  304033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-527950
	I1009 18:47:08.210240  304033 ssh_runner.go:195] Run: cat /version.json
	I1009 18:47:08.210295  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:08.210311  304033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:47:08.210383  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:08.231489  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:08.231668  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:08.319213  304033 ssh_runner.go:195] Run: systemctl --version
	I1009 18:47:08.458240  304033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:47:08.598441  304033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:47:08.602575  304033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:47:08.624818  304033 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:47:08.624957  304033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:47:08.657848  304033 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 18:47:08.657914  304033 start.go:495] detecting cgroup driver to use...
	I1009 18:47:08.657962  304033 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 18:47:08.658030  304033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:47:08.674220  304033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:47:08.686090  304033 docker.go:217] disabling cri-docker service (if available) ...
	I1009 18:47:08.686157  304033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:47:08.700605  304033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:47:08.715393  304033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:47:08.805782  304033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:47:08.893036  304033 docker.go:233] disabling docker service ...
	I1009 18:47:08.893103  304033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:47:08.914155  304033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:47:08.925184  304033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:47:09.013325  304033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:47:09.106091  304033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:47:09.118765  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:47:09.135628  304033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 18:47:09.135701  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.145458  304033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:47:09.145526  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.155100  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.164516  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.173943  304033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:47:09.183027  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.192532  304033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.208097  304033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:47:09.217707  304033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:47:09.226046  304033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:47:09.234429  304033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:09.317751  304033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:47:09.427421  304033 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:47:09.427502  304033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:47:09.431177  304033 start.go:563] Will wait 60s for crictl version
	I1009 18:47:09.431241  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:47:09.434626  304033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:47:09.469924  304033 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1009 18:47:09.470036  304033 ssh_runner.go:195] Run: crio --version
	I1009 18:47:09.508234  304033 ssh_runner.go:195] Run: crio --version
	I1009 18:47:09.548641  304033 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1009 18:47:09.550467  304033 cli_runner.go:164] Run: docker network inspect addons-527950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:47:09.565652  304033 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:47:09.569090  304033 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:47:09.579550  304033 kubeadm.go:883] updating cluster {Name:addons-527950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:47:09.579667  304033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:09.579728  304033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:47:09.659380  304033 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:47:09.659407  304033 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:47:09.659463  304033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:47:09.695675  304033 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:47:09.695698  304033 cache_images.go:84] Images are preloaded, skipping loading
	I1009 18:47:09.695707  304033 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1009 18:47:09.695806  304033 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-527950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:47:09.695909  304033 ssh_runner.go:195] Run: crio config
	I1009 18:47:09.742122  304033 cni.go:84] Creating CNI manager for ""
	I1009 18:47:09.742148  304033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:47:09.742162  304033 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 18:47:09.742185  304033 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-527950 NodeName:addons-527950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:47:09.742333  304033 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-527950"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:47:09.742408  304033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 18:47:09.751565  304033 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:47:09.751659  304033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:47:09.760598  304033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 18:47:09.779421  304033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:47:09.797672  304033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1009 18:47:09.815460  304033 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:47:09.818703  304033 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:47:09.829474  304033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:09.916676  304033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:47:09.931059  304033 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950 for IP: 192.168.49.2
	I1009 18:47:09.931135  304033 certs.go:194] generating shared ca certs ...
	I1009 18:47:09.931172  304033 certs.go:226] acquiring lock for ca certs: {Name:mk418a701df590b3680a6c2f2b51a4efe8f18158 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:09.931353  304033 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-297764/.minikube/ca.key
	I1009 18:47:10.138892  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt ...
	I1009 18:47:10.138928  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt: {Name:mka573a2390739d804ee8d59f4a43e86b90264a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:10.139596  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/ca.key ...
	I1009 18:47:10.139618  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/ca.key: {Name:mk7da466fe49415e1687db949c3a1f708289c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:10.139760  304033 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.key
	I1009 18:47:11.164386  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.crt ...
	I1009 18:47:11.164417  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.crt: {Name:mkec4d80a4ab0fb9ee287b7e7a4f7ac45a446127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.164608  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.key ...
	I1009 18:47:11.164621  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.key: {Name:mke0ef870817ab2bcee921a0ff5cb39c33e6eef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.164706  304033 certs.go:256] generating profile certs ...
	I1009 18:47:11.164762  304033 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.key
	I1009 18:47:11.164787  304033 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt with IP's: []
	I1009 18:47:11.436858  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt ...
	I1009 18:47:11.436897  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: {Name:mkbcb7ac22e740f2304a6be4c2633cb0af076ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.437140  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.key ...
	I1009 18:47:11.437159  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.key: {Name:mk67eac7ee05a3a1a7a6380796dbbe334f8625f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.437249  304033 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key.686f537e
	I1009 18:47:11.437270  304033 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt.686f537e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:47:11.662461  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt.686f537e ...
	I1009 18:47:11.662491  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt.686f537e: {Name:mkf0f88aa72f17d15a6129e5ca54a443493db4ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.662694  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key.686f537e ...
	I1009 18:47:11.662715  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key.686f537e: {Name:mk3577c4c7a4f5fac0afb1a3e6e7d40d0beb502b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:11.662802  304033 certs.go:381] copying /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt.686f537e -> /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt
	I1009 18:47:11.662923  304033 certs.go:385] copying /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key.686f537e -> /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key
	I1009 18:47:11.663053  304033 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.key
	I1009 18:47:11.663085  304033 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.crt with IP's: []
	I1009 18:47:12.389711  304033 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.crt ...
	I1009 18:47:12.389747  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.crt: {Name:mkd26c072f0e9883918450a800bb3a8b4f91aa9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:12.389939  304033 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.key ...
	I1009 18:47:12.389953  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.key: {Name:mk83abffff6bed48819c60b9bcb07a45468162ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:12.390146  304033 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:47:12.390194  304033 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:47:12.390221  304033 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:47:12.390251  304033 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-297764/.minikube/certs/key.pem (1675 bytes)
	I1009 18:47:12.390850  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:47:12.416462  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:47:12.441202  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:47:12.465154  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:47:12.490047  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:47:12.515975  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:47:12.540565  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:47:12.565887  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:47:12.591023  304033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:47:12.614949  304033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:47:12.633234  304033 ssh_runner.go:195] Run: openssl version
	I1009 18:47:12.638621  304033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:47:12.647993  304033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:12.651338  304033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:12.651402  304033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:12.658247  304033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:47:12.667592  304033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:47:12.670772  304033 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:47:12.670859  304033 kubeadm.go:392] StartCluster: {Name:addons-527950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-527950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:12.670941  304033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:47:12.671004  304033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:47:12.707540  304033 cri.go:89] found id: ""
	I1009 18:47:12.707664  304033 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:47:12.716580  304033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:47:12.725484  304033 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:47:12.725561  304033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:47:12.734565  304033 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:47:12.734587  304033 kubeadm.go:157] found existing configuration files:
	
	I1009 18:47:12.734657  304033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:47:12.743660  304033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:47:12.743730  304033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:47:12.751953  304033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:47:12.761032  304033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:47:12.761098  304033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:47:12.769308  304033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:47:12.777927  304033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:47:12.778022  304033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:47:12.786608  304033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:47:12.795604  304033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:47:12.795702  304033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:47:12.804210  304033 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:47:12.844186  304033 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 18:47:12.844508  304033 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 18:47:12.863957  304033 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:47:12.864059  304033 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1009 18:47:12.864127  304033 kubeadm.go:310] OS: Linux
	I1009 18:47:12.864201  304033 kubeadm.go:310] CGROUPS_CPU: enabled
	I1009 18:47:12.864275  304033 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1009 18:47:12.864344  304033 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1009 18:47:12.864416  304033 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1009 18:47:12.864482  304033 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1009 18:47:12.864554  304033 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1009 18:47:12.864620  304033 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1009 18:47:12.864684  304033 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1009 18:47:12.864751  304033 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1009 18:47:12.925312  304033 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:47:12.925455  304033 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:47:12.926070  304033 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:47:12.932503  304033 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:47:12.937401  304033 out.go:235]   - Generating certificates and keys ...
	I1009 18:47:12.937609  304033 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 18:47:12.937714  304033 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 18:47:13.481383  304033 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:47:14.392000  304033 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:47:15.133416  304033 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:47:15.917613  304033 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 18:47:16.312386  304033 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 18:47:16.312676  304033 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-527950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:16.944460  304033 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 18:47:16.944622  304033 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-527950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:17.709146  304033 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:47:18.371425  304033 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:47:18.857265  304033 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 18:47:18.857518  304033 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:47:19.166994  304033 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:47:19.841743  304033 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:47:20.047368  304033 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:47:20.554711  304033 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:47:20.833789  304033 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:47:20.834499  304033 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:47:20.837485  304033 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:47:20.839755  304033 out.go:235]   - Booting up control plane ...
	I1009 18:47:20.839880  304033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:47:20.839963  304033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:47:20.841068  304033 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:47:20.851332  304033 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:47:20.857713  304033 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:47:20.857767  304033 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 18:47:20.950657  304033 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:47:20.950786  304033 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:47:22.953242  304033 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.002484857s
	I1009 18:47:22.953336  304033 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 18:47:29.454607  304033 kubeadm.go:310] [api-check] The API server is healthy after 6.50176464s
	I1009 18:47:29.473613  304033 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:47:29.492004  304033 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:47:29.516257  304033 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:47:29.516457  304033 kubeadm.go:310] [mark-control-plane] Marking the node addons-527950 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:47:29.529069  304033 kubeadm.go:310] [bootstrap-token] Using token: 4sfs59.wjhnvipwche2c0ya
	I1009 18:47:29.532782  304033 out.go:235]   - Configuring RBAC rules ...
	I1009 18:47:29.532924  304033 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:47:29.537002  304033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:47:29.547550  304033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:47:29.551787  304033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:47:29.557519  304033 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:47:29.562768  304033 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:47:29.861682  304033 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:47:30.343911  304033 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 18:47:30.860793  304033 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 18:47:30.864121  304033 kubeadm.go:310] 
	I1009 18:47:30.864209  304033 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 18:47:30.864222  304033 kubeadm.go:310] 
	I1009 18:47:30.864308  304033 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 18:47:30.864316  304033 kubeadm.go:310] 
	I1009 18:47:30.864342  304033 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 18:47:30.864400  304033 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:47:30.864459  304033 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:47:30.864464  304033 kubeadm.go:310] 
	I1009 18:47:30.864519  304033 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 18:47:30.864528  304033 kubeadm.go:310] 
	I1009 18:47:30.864575  304033 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:47:30.864583  304033 kubeadm.go:310] 
	I1009 18:47:30.864635  304033 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 18:47:30.864714  304033 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:47:30.864785  304033 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:47:30.864793  304033 kubeadm.go:310] 
	I1009 18:47:30.864877  304033 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:47:30.864956  304033 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 18:47:30.864964  304033 kubeadm.go:310] 
	I1009 18:47:30.865048  304033 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4sfs59.wjhnvipwche2c0ya \
	I1009 18:47:30.865153  304033 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33c056a8235ede20aa813560942c562368f7b9dea0d47cbab7f3fe3a61439fce \
	I1009 18:47:30.865176  304033 kubeadm.go:310] 	--control-plane 
	I1009 18:47:30.865185  304033 kubeadm.go:310] 
	I1009 18:47:30.865270  304033 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:47:30.865278  304033 kubeadm.go:310] 
	I1009 18:47:30.865359  304033 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4sfs59.wjhnvipwche2c0ya \
	I1009 18:47:30.865464  304033 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33c056a8235ede20aa813560942c562368f7b9dea0d47cbab7f3fe3a61439fce 
	I1009 18:47:30.867225  304033 kubeadm.go:310] W1009 18:47:12.840620    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:47:30.867578  304033 kubeadm.go:310] W1009 18:47:12.841651    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:47:30.867903  304033 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1009 18:47:30.868043  304033 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:47:30.868079  304033 cni.go:84] Creating CNI manager for ""
	I1009 18:47:30.868091  304033 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:47:30.870211  304033 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 18:47:30.872572  304033 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 18:47:30.876374  304033 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 18:47:30.876395  304033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 18:47:30.895222  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 18:47:31.173339  304033 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:47:31.173498  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:31.173600  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-527950 minikube.k8s.io/updated_at=2024_10_09T18_47_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=addons-527950 minikube.k8s.io/primary=true
	I1009 18:47:31.300200  304033 ops.go:34] apiserver oom_adj: -16
	I1009 18:47:31.300314  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:31.800940  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:32.300445  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:32.800428  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:33.301071  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:33.800468  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:34.300448  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:34.801021  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:35.300748  304033 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:35.430120  304033 kubeadm.go:1113] duration metric: took 4.256673073s to wait for elevateKubeSystemPrivileges
	I1009 18:47:35.430152  304033 kubeadm.go:394] duration metric: took 22.759297617s to StartCluster
	I1009 18:47:35.430170  304033 settings.go:142] acquiring lock: {Name:mk94c15161ad7dabfbd54a7b84d6e9487d964391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:35.430286  304033 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 18:47:35.430727  304033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-297764/kubeconfig: {Name:mk805654b0a3d9c829b5d3a4422736c8bd907781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:35.430933  304033 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:47:35.431081  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:47:35.431337  304033 config.go:182] Loaded profile config "addons-527950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:35.431378  304033 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:47:35.431469  304033 addons.go:69] Setting yakd=true in profile "addons-527950"
	I1009 18:47:35.431488  304033 addons.go:234] Setting addon yakd=true in "addons-527950"
	I1009 18:47:35.431515  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.432042  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.432604  304033 addons.go:69] Setting metrics-server=true in profile "addons-527950"
	I1009 18:47:35.432627  304033 addons.go:234] Setting addon metrics-server=true in "addons-527950"
	I1009 18:47:35.432663  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.433105  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.437557  304033 addons.go:69] Setting cloud-spanner=true in profile "addons-527950"
	I1009 18:47:35.437601  304033 addons.go:234] Setting addon cloud-spanner=true in "addons-527950"
	I1009 18:47:35.437639  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.438276  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.438732  304033 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-527950"
	I1009 18:47:35.438783  304033 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-527950"
	I1009 18:47:35.438819  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.438962  304033 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-527950"
	I1009 18:47:35.439093  304033 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-527950"
	I1009 18:47:35.439145  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.439228  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.440259  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.445233  304033 addons.go:69] Setting default-storageclass=true in profile "addons-527950"
	I1009 18:47:35.445276  304033 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-527950"
	I1009 18:47:35.445648  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.455926  304033 addons.go:69] Setting registry=true in profile "addons-527950"
	I1009 18:47:35.455958  304033 addons.go:234] Setting addon registry=true in "addons-527950"
	I1009 18:47:35.455995  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.456021  304033 addons.go:69] Setting gcp-auth=true in profile "addons-527950"
	I1009 18:47:35.456052  304033 mustload.go:65] Loading cluster: addons-527950
	I1009 18:47:35.456221  304033 config.go:182] Loaded profile config "addons-527950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:35.456452  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.456455  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.460626  304033 addons.go:69] Setting ingress=true in profile "addons-527950"
	I1009 18:47:35.491970  304033 addons.go:234] Setting addon ingress=true in "addons-527950"
	I1009 18:47:35.492033  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.492553  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.460856  304033 addons.go:69] Setting ingress-dns=true in profile "addons-527950"
	I1009 18:47:35.507970  304033 addons.go:234] Setting addon ingress-dns=true in "addons-527950"
	I1009 18:47:35.508024  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.508498  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.460877  304033 addons.go:69] Setting inspektor-gadget=true in profile "addons-527950"
	I1009 18:47:35.535594  304033 addons.go:234] Setting addon inspektor-gadget=true in "addons-527950"
	I1009 18:47:35.535649  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.541439  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.547661  304033 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1009 18:47:35.460927  304033 out.go:177] * Verifying Kubernetes components...
	I1009 18:47:35.474497  304033 addons.go:69] Setting storage-provisioner=true in profile "addons-527950"
	I1009 18:47:35.549392  304033 addons.go:234] Setting addon storage-provisioner=true in "addons-527950"
	I1009 18:47:35.549437  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.474516  304033 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-527950"
	I1009 18:47:35.553791  304033 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-527950"
	I1009 18:47:35.554141  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.554636  304033 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1009 18:47:35.556852  304033 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:47:35.556916  304033 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:47:35.557012  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.474526  304033 addons.go:69] Setting volcano=true in profile "addons-527950"
	I1009 18:47:35.557284  304033 addons.go:234] Setting addon volcano=true in "addons-527950"
	I1009 18:47:35.557324  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.557771  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.474533  304033 addons.go:69] Setting volumesnapshots=true in profile "addons-527950"
	I1009 18:47:35.561582  304033 addons.go:234] Setting addon volumesnapshots=true in "addons-527950"
	I1009 18:47:35.561624  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.562120  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.577215  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.598840  304033 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:47:35.601445  304033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:35.603819  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:47:35.603901  304033 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:47:35.603980  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.629042  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.652203  304033 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:35.652224  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:47:35.652287  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.655035  304033 addons.go:234] Setting addon default-storageclass=true in "addons-527950"
	I1009 18:47:35.655120  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.655655  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.699635  304033 out.go:177]   - Using image docker.io/registry:2.8.3
	I1009 18:47:35.707558  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:47:35.707665  304033 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	W1009 18:47:35.708249  304033 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 18:47:35.699811  304033 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1009 18:47:35.699928  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.748939  304033 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1009 18:47:35.749224  304033 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:35.749258  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:47:35.749351  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.760596  304033 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:35.762509  304033 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:35.762527  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1009 18:47:35.762594  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.767049  304033 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1009 18:47:35.767977  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:47:35.768987  304033 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:47:35.769736  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:47:35.769812  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.789059  304033 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:35.792105  304033 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:35.792132  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:47:35.792194  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.810324  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.811245  304033 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-527950"
	I1009 18:47:35.812199  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:35.812641  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:35.816111  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 18:47:35.816255  304033 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1009 18:47:35.816495  304033 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:47:35.816657  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:47:35.820213  304033 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:35.820233  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:47:35.820295  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.824768  304033 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1009 18:47:35.824800  304033 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1009 18:47:35.824879  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.830715  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:47:35.830739  304033 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:47:35.830802  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.867965  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:47:35.873671  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:47:35.877495  304033 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:35.877513  304033 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:47:35.877572  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.878052  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.881756  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:47:35.883886  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.888781  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:47:35.893536  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:47:35.896899  304033 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:47:35.900401  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:47:35.900426  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:47:35.900502  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:35.963345  304033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:47:35.993783  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:35.993790  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.011792  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.031894  304033 out.go:177]   - Using image docker.io/busybox:stable
	I1009 18:47:36.032063  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.032923  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.037995  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.039874  304033 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:47:36.041584  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.043781  304033 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:36.043805  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:47:36.044367  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:36.052831  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:36.081159  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	W1009 18:47:36.082186  304033 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1009 18:47:36.082214  304033 retry.go:31] will retry after 222.285673ms: ssh: handshake failed: EOF
	I1009 18:47:36.199036  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:47:36.199062  304033 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:47:36.208965  304033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:47:36.209035  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:47:36.364361  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:47:36.364426  304033 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:47:36.373759  304033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:47:36.373826  304033 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:47:36.374020  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:36.427399  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:36.468037  304033 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:47:36.468104  304033 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:47:36.499014  304033 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1009 18:47:36.499087  304033 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1009 18:47:36.522134  304033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:36.522214  304033 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:47:36.561569  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:36.574480  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:36.584795  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:47:36.584867  304033 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:47:36.595897  304033 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:36.595966  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:47:36.599286  304033 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:47:36.599358  304033 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:47:36.625695  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:36.687502  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:47:36.687575  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:47:36.699072  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:36.719366  304033 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1009 18:47:36.719445  304033 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1009 18:47:36.748332  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:36.764943  304033 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:36.765015  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:47:36.804387  304033 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:47:36.804467  304033 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:47:36.807429  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:36.823288  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:36.901413  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:47:36.901490  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:47:36.940833  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:36.949854  304033 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1009 18:47:36.949935  304033 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1009 18:47:36.994069  304033 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:47:36.994141  304033 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:47:37.085150  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:47:37.085224  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:47:37.109687  304033 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1009 18:47:37.109762  304033 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1009 18:47:37.176605  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:47:37.176680  304033 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:47:37.303942  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:47:37.303969  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:47:37.312257  304033 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1009 18:47:37.312279  304033 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1009 18:47:37.407328  304033 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:37.407396  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:47:37.474437  304033 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:47:37.474509  304033 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:47:37.482393  304033 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1009 18:47:37.482472  304033 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1009 18:47:37.542596  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:37.566059  304033 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:47:37.566132  304033 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1009 18:47:37.609938  304033 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:37.610061  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1009 18:47:37.619088  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:47:37.619159  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:47:37.653242  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:37.716441  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:47:37.716516  304033 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:47:37.811362  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:47:37.811435  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:47:37.936380  304033 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.047554904s)
	I1009 18:47:37.936458  304033 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1009 18:47:37.936938  304033 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.973513522s)
	I1009 18:47:37.938829  304033 node_ready.go:35] waiting up to 6m0s for node "addons-527950" to be "Ready" ...
	I1009 18:47:37.952478  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:47:37.952554  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:47:38.115304  304033 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:38.115383  304033 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:47:38.229987  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:39.146308  304033 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-527950" context rescaled to 1 replicas
	I1009 18:47:40.102649  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.728585261s)
	I1009 18:47:40.102724  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.675263753s)
	I1009 18:47:40.180829  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:42.457778  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:42.601832  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.97606695s)
	I1009 18:47:42.601897  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.902758069s)
	I1009 18:47:42.601954  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.853551581s)
	I1009 18:47:42.601965  304033 addons.go:475] Verifying addon metrics-server=true in "addons-527950"
	I1009 18:47:42.602009  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.794523923s)
	I1009 18:47:42.602221  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.778862677s)
	I1009 18:47:42.602237  304033 addons.go:475] Verifying addon registry=true in "addons-527950"
	I1009 18:47:42.601772  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.027206149s)
	I1009 18:47:42.602598  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.041004638s)
	I1009 18:47:42.602744  304033 addons.go:475] Verifying addon ingress=true in "addons-527950"
	I1009 18:47:42.602909  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.661994803s)
	I1009 18:47:42.604456  304033 out.go:177] * Verifying registry addon...
	I1009 18:47:42.604536  304033 out.go:177] * Verifying ingress addon...
	I1009 18:47:42.605567  304033 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-527950 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:47:42.609434  304033 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:47:42.609600  304033 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:47:42.621308  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.967975579s)
	I1009 18:47:42.621365  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.078690089s)
	W1009 18:47:42.621393  304033 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:42.621415  304033 retry.go:31] will retry after 269.402403ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:42.626872  304033 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:47:42.626905  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:42.628010  304033 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:47:42.628028  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 18:47:42.651712  304033 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	I1009 18:47:42.891579  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:42.960372  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.730287478s)
	I1009 18:47:42.960410  304033 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-527950"
	I1009 18:47:42.962711  304033 out.go:177] * Verifying csi-hostpath-driver addon...
	I1009 18:47:42.965763  304033 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:47:42.979267  304033 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:47:42.979292  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.139236  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.140236  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.470438  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.616870  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.618481  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.839059  304033 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:47:43.839173  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:43.856861  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:43.969698  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.992864  304033 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:47:44.018568  304033 addons.go:234] Setting addon gcp-auth=true in "addons-527950"
	I1009 18:47:44.018671  304033 host.go:66] Checking if "addons-527950" exists ...
	I1009 18:47:44.019158  304033 cli_runner.go:164] Run: docker container inspect addons-527950 --format={{.State.Status}}
	I1009 18:47:44.037564  304033 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:47:44.037623  304033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-527950
	I1009 18:47:44.069543  304033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/addons-527950/id_rsa Username:docker}
	I1009 18:47:44.113761  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.114874  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.470256  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:44.615372  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.616721  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.942531  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:44.970874  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.116004  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.116306  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.469442  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.614632  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.616676  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.974353  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.114015  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.115006  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.129856  304033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.238210804s)
	I1009 18:47:46.129919  304033 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.092337666s)
	I1009 18:47:46.132410  304033 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:46.134402  304033 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1009 18:47:46.136336  304033 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:47:46.136393  304033 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:47:46.172872  304033 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:47:46.172903  304033 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:47:46.202003  304033 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:46.202030  304033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:47:46.238775  304033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:46.473442  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.613636  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.614400  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.957671  304033 addons.go:475] Verifying addon gcp-auth=true in "addons-527950"
	I1009 18:47:46.959941  304033 out.go:177] * Verifying gcp-auth addon...
	I1009 18:47:46.962772  304033 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:47:46.977540  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:46.989545  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.990127  304033 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:47:46.990151  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.114479  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.115561  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.477865  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.478597  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:47.613231  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.614694  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.966672  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.969432  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.113365  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.114315  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.466814  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:48.469593  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.613729  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.614145  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.968566  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:48.970635  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.113442  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:49.114331  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.442728  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:49.466179  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:49.468907  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.613933  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.614285  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:49.967573  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:49.972595  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.113657  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.115138  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:50.466322  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:50.468962  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.612790  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.613783  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:50.967263  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:50.969774  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.114889  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.115247  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.442822  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:51.466666  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:51.468827  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.614136  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.614331  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.966997  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:51.970021  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.114659  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.115518  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.466814  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:52.469046  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.614462  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.614746  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.966528  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:52.970045  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.112865  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.113615  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.465945  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:53.468884  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.613759  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.614632  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.941902  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:53.967645  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:53.968981  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.114257  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.115106  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.467158  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:54.469299  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.614096  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.614927  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.967180  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:54.970110  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.114055  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.115130  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.466702  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:55.469774  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.613861  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.614708  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.942373  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:55.966511  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:55.970194  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.113906  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.114601  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.466046  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:56.468704  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.613347  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.614168  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.969257  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:56.971760  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.113796  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.114319  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.467387  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:57.469141  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.614928  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.615500  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.942714  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:57.965940  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:57.969600  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.113472  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.114275  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.465713  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:58.469119  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.613518  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.613841  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.965892  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:58.968750  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.113337  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:59.113850  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.466957  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:59.469562  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.613320  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:59.613978  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.942768  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:47:59.967569  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:59.971061  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.167947  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.168299  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:00.475238  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.477770  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.615765  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.620867  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:00.967264  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.970258  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.113930  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:01.115036  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.466741  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:01.468963  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.613896  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:01.614551  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.966250  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:01.969250  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.113195  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:02.114413  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.442717  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:02.465948  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:02.468968  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.613203  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:02.613923  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.967326  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:02.969572  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.113367  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:03.114327  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.466682  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:03.469080  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.613080  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:03.613915  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.966623  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:03.969605  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.113982  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:04.114887  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.466929  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:04.470224  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.614190  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:04.615130  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.942842  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:04.966423  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:04.969853  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.114104  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.114335  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:05.465693  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:05.469687  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.613413  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:05.614326  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.966861  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:05.970488  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.114240  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:06.115311  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:06.467075  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:06.470123  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.613196  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:06.614024  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:06.969182  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:06.971148  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.113815  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:07.114686  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:07.442752  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:07.467399  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:07.470071  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.613678  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:07.614430  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:07.967516  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:07.969900  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.114073  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:08.115136  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:08.466317  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:08.468616  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.613467  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:08.615109  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:08.968147  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:08.970309  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.114065  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:09.114825  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:09.466716  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:09.469304  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.615986  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:09.616958  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:09.942863  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:09.968057  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:09.969438  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.113870  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:10.114797  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:10.468148  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:10.469537  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.614109  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:10.614993  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:10.967157  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:10.969626  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.113810  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:11.114729  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:11.467523  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:11.469466  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.612898  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:11.614466  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:11.966755  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:11.969835  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:12.113427  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:12.114395  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:12.441855  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:12.466663  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:12.469544  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:12.613117  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:12.614171  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:12.966560  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:12.968940  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:13.112864  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:13.114070  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:13.465995  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:13.468812  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:13.613216  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:13.614390  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:13.967516  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:13.969698  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:14.113016  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:14.113858  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:14.442350  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:14.466082  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:14.469065  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:14.613566  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:14.615728  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:14.974253  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:14.975329  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:15.116202  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:15.117112  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:15.466829  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:15.468926  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:15.613714  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:15.614708  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:15.965993  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:15.968902  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:16.114033  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:16.114290  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:16.444200  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:16.465705  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:16.469299  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:16.613804  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:16.614653  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:16.968584  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:16.971528  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:17.113154  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:17.113955  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:17.466677  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:17.469587  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:17.613858  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:17.614601  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:17.967590  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:17.969598  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:18.113552  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:18.113793  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:18.466613  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:18.469301  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:18.613432  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:18.614029  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:18.942730  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:18.966919  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:18.969509  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:19.112858  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:19.113727  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:19.467124  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:19.469314  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:19.613988  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:19.615103  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:19.966223  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:19.969805  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:20.114050  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:20.115670  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:20.466334  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:20.468932  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:20.613720  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:20.614515  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:20.966996  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:20.970341  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:21.113529  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:21.124309  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:21.442365  304033 node_ready.go:53] node "addons-527950" has status "Ready":"False"
	I1009 18:48:21.466443  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:21.469282  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:21.615028  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:21.615288  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:21.967175  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:21.970617  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:22.123818  304033 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:48:22.123872  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:22.128301  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:22.496694  304033 node_ready.go:49] node "addons-527950" has status "Ready":"True"
	I1009 18:48:22.496721  304033 node_ready.go:38] duration metric: took 44.55782189s for node "addons-527950" to be "Ready" ...
	I1009 18:48:22.496733  304033 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:48:22.543655  304033 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:48:22.543681  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:22.544075  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:22.551643  304033 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6xlwc" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:22.642147  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:22.644009  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:22.971789  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:22.973411  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.114385  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:23.115616  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:23.466600  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.471264  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:23.618141  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:23.619694  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:23.967422  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.970910  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:24.122271  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:24.123549  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:24.467212  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:24.472441  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:24.558439  304033 pod_ready.go:103] pod "coredns-7c65d6cfc9-6xlwc" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:24.612945  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:24.615087  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:24.967854  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:24.970633  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:25.063133  304033 pod_ready.go:93] pod "coredns-7c65d6cfc9-6xlwc" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.063160  304033 pod_ready.go:82] duration metric: took 2.511477514s for pod "coredns-7c65d6cfc9-6xlwc" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.063187  304033 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.068741  304033 pod_ready.go:93] pod "etcd-addons-527950" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.068768  304033 pod_ready.go:82] duration metric: took 5.571262ms for pod "etcd-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.068785  304033 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.075151  304033 pod_ready.go:93] pod "kube-apiserver-addons-527950" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.075181  304033 pod_ready.go:82] duration metric: took 6.386404ms for pod "kube-apiserver-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.075195  304033 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.081391  304033 pod_ready.go:93] pod "kube-controller-manager-addons-527950" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.081416  304033 pod_ready.go:82] duration metric: took 6.213427ms for pod "kube-controller-manager-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.081432  304033 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ffxxn" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.087591  304033 pod_ready.go:93] pod "kube-proxy-ffxxn" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.087621  304033 pod_ready.go:82] duration metric: took 6.159856ms for pod "kube-proxy-ffxxn" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.087635  304033 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.114767  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:25.115893  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:25.456015  304033 pod_ready.go:93] pod "kube-scheduler-addons-527950" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:25.456088  304033 pod_ready.go:82] duration metric: took 368.414383ms for pod "kube-scheduler-addons-527950" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.456116  304033 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:25.468590  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:25.478207  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:25.616983  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:25.619476  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:25.967046  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:25.972924  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:26.115966  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:26.118456  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:26.466523  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:26.473072  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:26.614761  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:26.615252  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:26.976855  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:26.980162  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:27.114736  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:27.115720  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:27.462867  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:27.465819  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:27.470197  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:27.614653  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:27.615507  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:27.968185  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:27.972217  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:28.113956  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:28.114401  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:28.473304  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:28.475807  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:28.615159  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:28.617957  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:28.968100  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:28.978744  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:29.116145  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:29.117082  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:29.463567  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:29.466190  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:29.470341  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:29.613752  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:29.614563  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:29.967968  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:29.970839  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:30.115332  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:30.116714  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:30.471895  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:30.473940  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:30.616029  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:30.616977  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:30.967444  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:30.973307  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:31.116434  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:31.118030  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:31.466384  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:31.470254  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:31.615148  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:31.615662  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:31.962817  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:31.966215  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:31.970714  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:32.114189  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:32.115434  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:32.466167  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:32.470171  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:32.613370  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:32.619230  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:32.966232  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:32.971104  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:33.135767  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:33.136942  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.466619  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:33.470194  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:33.614376  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:33.614871  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.966762  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:33.975513  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:33.982416  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:34.116387  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:34.118234  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.467556  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:34.477536  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:34.616063  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.616975  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:34.971940  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:34.977144  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.123857  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:35.124361  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.466001  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:35.476450  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.618428  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:35.620017  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.973771  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:35.975509  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:35.976914  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:36.114049  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.115166  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:36.468297  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:36.476558  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:36.626259  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.626951  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:36.982473  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:36.985420  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:37.116064  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:37.117439  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.476216  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:37.479493  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:37.615657  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:37.616680  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.978555  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:37.981639  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:38.116509  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:38.120632  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.464270  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:38.470039  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:38.473285  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:38.619883  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:38.621187  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.967218  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:38.972283  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:39.114823  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:39.116199  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.466190  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:39.469860  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:39.613818  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:39.615031  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.965790  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:39.975662  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:40.118383  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.120679  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:40.472028  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:40.478162  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:40.618451  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.621474  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:40.966048  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:40.968112  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:40.972494  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:41.115797  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:41.116788  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.467477  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:41.471388  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:41.616084  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:41.618090  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.972676  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:41.977405  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:42.115596  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:42.117391  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.501410  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:42.504228  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:42.622124  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:42.623663  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.973225  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:42.974679  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:43.122560  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:43.124442  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.463083  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:43.466647  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:43.480649  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:43.615745  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:43.616815  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.968031  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:43.971215  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:44.114166  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:44.115373  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.466216  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:44.470464  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:44.613187  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:44.614854  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.966284  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:44.971705  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:45.114701  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:45.118863  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:45.463320  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:45.485187  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:45.488022  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:45.614156  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:45.617767  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:45.970260  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:45.973832  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:46.120375  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:46.121351  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:46.467133  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:46.470289  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:46.615136  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:46.616049  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:46.971794  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:46.973136  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:47.113858  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:47.115074  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:47.466024  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:47.469942  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:47.614041  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:47.615594  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:47.963480  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:47.966086  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:47.970697  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:48.115534  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:48.116236  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:48.479231  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:48.481167  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:48.616081  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:48.616640  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:48.978543  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:48.980247  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:49.115752  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:49.117215  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:49.470349  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:49.473745  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:49.638445  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:49.641123  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:49.973937  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:49.998462  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:49.999398  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:50.127646  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:50.128641  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:50.477353  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:50.480643  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:50.620936  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:50.621785  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:50.990362  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:50.992213  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:51.137550  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:51.139750  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:51.480889  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:51.482849  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:51.620294  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:51.621124  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:51.983315  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:51.983734  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:51.985674  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:52.121203  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:52.122227  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:52.488190  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:52.488696  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:52.621348  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:52.622397  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:52.973489  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:52.976280  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:53.114505  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:53.115969  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:53.466643  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:53.470645  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:53.615894  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:53.616144  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:53.967476  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:53.972230  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:54.115365  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:54.115957  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:54.462342  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:54.466182  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:54.470124  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:54.613994  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:54.616086  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:54.965850  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:54.970390  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:55.113821  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:55.114931  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:55.469185  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:55.471083  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:55.613770  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:55.615440  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:55.972049  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:55.976515  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:56.116298  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:56.117385  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:56.464928  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:56.471280  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:56.475782  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:56.616184  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:56.617032  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:56.982278  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:56.983170  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:57.116068  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:57.117641  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:57.497978  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:57.504965  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:57.625883  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:57.627448  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:57.969102  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:57.984271  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:58.115870  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:58.117588  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:58.467741  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:58.474789  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:58.617964  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:58.618952  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:58.965836  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:58.976785  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:58.981718  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:59.116153  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:59.117305  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:59.474720  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:59.478028  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:59.615662  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:59.617257  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:59.980876  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:59.989080  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:00.126005  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:00.142328  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:00.470468  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:00.473293  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:00.616025  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:00.617634  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:00.968026  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:00.971990  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:00.973011  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:01.121647  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:01.123101  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:01.467497  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:01.475416  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:01.618173  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:01.620980  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:01.965661  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:01.982918  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:02.114277  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:02.115325  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:02.469769  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.472093  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:02.615019  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:02.615374  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:02.967129  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.970216  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:03.114220  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:03.115908  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:03.462865  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:03.466237  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.471018  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:03.622491  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:03.629655  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:03.972780  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.975673  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:04.117171  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:04.117561  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:04.472146  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.474073  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:04.614433  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:04.615674  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:04.972086  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.974217  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:05.116010  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:05.116503  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:05.469391  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.471296  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:05.614005  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:05.614350  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:05.963099  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:05.965961  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.971255  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:06.114343  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:06.115680  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:06.466353  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.470068  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:06.615652  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:06.617163  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:06.972561  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.974379  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:07.114592  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:07.115275  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:07.492753  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:07.496941  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.618481  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:07.618849  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:07.966317  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:07.972605  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.977117  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.129933  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:08.130767  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:08.470211  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:08.472378  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.615561  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:08.616860  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:08.977309  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:08.987299  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:09.117418  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:09.119248  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:09.466130  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.470234  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:09.615370  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:09.616323  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:09.970548  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.974588  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:09.978156  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.149650  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:10.162144  304033 kapi.go:107] duration metric: took 1m27.552529397s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:49:10.466728  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:10.470334  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.623166  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:10.972282  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.972717  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.122257  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:11.466486  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.470590  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:11.613870  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:11.968700  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.972587  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:12.115452  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:12.462544  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:12.466598  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.470000  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:12.614574  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:12.974719  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.976647  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:13.114622  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:13.467786  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.471589  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:13.614735  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:13.967377  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.970691  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:14.113756  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:14.466102  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.469952  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:14.614393  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:14.962649  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:14.966392  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.970430  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:15.117792  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:15.483272  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:15.483761  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.618147  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:15.972344  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.975980  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:16.118624  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:16.466150  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.470295  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:16.614735  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:16.967547  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.973534  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:17.114521  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:17.463578  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:17.466495  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.470381  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:17.613540  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:17.967749  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.971121  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:18.114855  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:18.468498  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.472841  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:18.614440  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:18.965755  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.970182  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:19.114544  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:19.466601  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.470517  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:19.613787  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:19.971116  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.980100  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:19.981252  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:20.116225  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:20.476972  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.488967  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:20.614027  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:20.974662  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.980039  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:21.113935  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:21.468347  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.475280  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:21.614197  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:21.968400  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.972113  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.114431  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:22.462898  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:22.466091  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:22.470570  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.614986  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:22.975693  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.976806  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.114722  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:23.471253  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.472928  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:23.613729  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:23.965692  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.970459  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:24.114498  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:24.466233  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:24.470198  304033 kapi.go:107] duration metric: took 1m41.504431301s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:49:24.613824  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:24.962593  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:24.968152  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.114411  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:25.467189  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.614302  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:25.967167  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.114354  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:26.467106  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.615126  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:26.966084  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:26.970775  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.114892  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:27.466076  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.614055  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:27.973842  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.117355  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:28.466671  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.616546  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:28.972899  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.114866  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:29.462426  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:29.468636  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.614527  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:29.971817  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.115470  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:30.466964  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.613745  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:30.966303  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.114688  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:31.474490  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:31.476341  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.615551  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:31.968292  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.114165  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:32.470183  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.614946  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:32.968583  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.115306  304033 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:33.487810  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:33.488665  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.615421  304033 kapi.go:107] duration metric: took 1m51.00598366s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:49:33.966423  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.466344  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.966599  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.468430  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.974437  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:35.985140  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.466139  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.977106  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.468165  304033 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.968110  304033 kapi.go:107] duration metric: took 1m51.005343288s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:49:37.970499  304033 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-527950 cluster.
	I1009 18:49:37.972276  304033 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:49:37.973863  304033 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:49:37.975623  304033 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1009 18:49:37.977605  304033 addons.go:510] duration metric: took 2m2.546216298s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1009 18:49:38.464717  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:40.962940  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:43.462949  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:45.463144  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:47.962131  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:49.962252  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:51.963067  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:54.462586  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:56.462815  304033 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"False"
	I1009 18:49:57.469272  304033 pod_ready.go:93] pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace has status "Ready":"True"
	I1009 18:49:57.469298  304033 pod_ready.go:82] duration metric: took 1m32.013161291s for pod "metrics-server-84c5f94fbc-2rc87" in "kube-system" namespace to be "Ready" ...
	I1009 18:49:57.469312  304033 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-frbq8" in "kube-system" namespace to be "Ready" ...
	I1009 18:49:57.474879  304033 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-frbq8" in "kube-system" namespace has status "Ready":"True"
	I1009 18:49:57.474905  304033 pod_ready.go:82] duration metric: took 5.585212ms for pod "nvidia-device-plugin-daemonset-frbq8" in "kube-system" namespace to be "Ready" ...
	I1009 18:49:57.474927  304033 pod_ready.go:39] duration metric: took 1m34.978144286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:49:57.474942  304033 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:49:57.474972  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:49:57.475034  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:49:57.526169  304033 cri.go:89] found id: "4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:49:57.526247  304033 cri.go:89] found id: ""
	I1009 18:49:57.526262  304033 logs.go:282] 1 containers: [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06]
	I1009 18:49:57.526326  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.529983  304033 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:49:57.530094  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:49:57.571108  304033 cri.go:89] found id: "0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:49:57.571201  304033 cri.go:89] found id: ""
	I1009 18:49:57.571226  304033 logs.go:282] 1 containers: [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987]
	I1009 18:49:57.571318  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.575953  304033 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:49:57.576043  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:49:57.613862  304033 cri.go:89] found id: "a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:49:57.613884  304033 cri.go:89] found id: ""
	I1009 18:49:57.613893  304033 logs.go:282] 1 containers: [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6]
	I1009 18:49:57.613949  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.617666  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:49:57.617739  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:49:57.664132  304033 cri.go:89] found id: "5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:49:57.664161  304033 cri.go:89] found id: ""
	I1009 18:49:57.664170  304033 logs.go:282] 1 containers: [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f]
	I1009 18:49:57.664231  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.668036  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:49:57.668109  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:49:57.716265  304033 cri.go:89] found id: "b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:49:57.716346  304033 cri.go:89] found id: ""
	I1009 18:49:57.716362  304033 logs.go:282] 1 containers: [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4]
	I1009 18:49:57.716421  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.720220  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:49:57.720295  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:49:57.765756  304033 cri.go:89] found id: "d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:49:57.765778  304033 cri.go:89] found id: ""
	I1009 18:49:57.765788  304033 logs.go:282] 1 containers: [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6]
	I1009 18:49:57.765847  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.769430  304033 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:49:57.769554  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:49:57.808879  304033 cri.go:89] found id: "658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:49:57.808902  304033 cri.go:89] found id: ""
	I1009 18:49:57.808910  304033 logs.go:282] 1 containers: [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4]
	I1009 18:49:57.808967  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:49:57.812398  304033 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:49:57.812425  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:49:58.016603  304033 logs.go:123] Gathering logs for etcd [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987] ...
	I1009 18:49:58.016637  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:49:58.068774  304033 logs.go:123] Gathering logs for coredns [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6] ...
	I1009 18:49:58.068812  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:49:58.119632  304033 logs.go:123] Gathering logs for kube-proxy [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4] ...
	I1009 18:49:58.119742  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:49:58.161474  304033 logs.go:123] Gathering logs for kubelet ...
	I1009 18:49:58.161551  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:49:58.233492  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.018098    1507 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.233735  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.018147    1507 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.235601  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.075946    1507 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.235817  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.236010  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.236237  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.236423  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.236659  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:49:58.276246  304033 logs.go:123] Gathering logs for kube-apiserver [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06] ...
	I1009 18:49:58.276283  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:49:58.350538  304033 logs.go:123] Gathering logs for kube-scheduler [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f] ...
	I1009 18:49:58.350572  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:49:58.406979  304033 logs.go:123] Gathering logs for kube-controller-manager [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6] ...
	I1009 18:49:58.407011  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:49:58.495065  304033 logs.go:123] Gathering logs for kindnet [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4] ...
	I1009 18:49:58.495100  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:49:58.535687  304033 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:49:58.535717  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:49:58.630369  304033 logs.go:123] Gathering logs for container status ...
	I1009 18:49:58.630410  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:49:58.684190  304033 logs.go:123] Gathering logs for dmesg ...
	I1009 18:49:58.684219  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:49:58.700990  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:49:58.701015  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:49:58.701064  304033 out.go:270] X Problems detected in kubelet:
	W1009 18:49:58.701080  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.701088  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.701100  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:49:58.701108  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:49:58.701119  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:49:58.701126  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:49:58.701136  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:50:08.702790  304033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:50:08.718870  304033 api_server.go:72] duration metric: took 2m33.287899548s to wait for apiserver process to appear ...
	I1009 18:50:08.718903  304033 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:50:08.718942  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:50:08.719007  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:50:08.767775  304033 cri.go:89] found id: "4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:50:08.767800  304033 cri.go:89] found id: ""
	I1009 18:50:08.767808  304033 logs.go:282] 1 containers: [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06]
	I1009 18:50:08.767889  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.771495  304033 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:50:08.771568  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:50:08.809558  304033 cri.go:89] found id: "0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:50:08.809583  304033 cri.go:89] found id: ""
	I1009 18:50:08.809592  304033 logs.go:282] 1 containers: [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987]
	I1009 18:50:08.809650  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.813279  304033 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:50:08.813351  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:50:08.859789  304033 cri.go:89] found id: "a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:50:08.859813  304033 cri.go:89] found id: ""
	I1009 18:50:08.859859  304033 logs.go:282] 1 containers: [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6]
	I1009 18:50:08.859920  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.863994  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:50:08.864072  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:50:08.908759  304033 cri.go:89] found id: "5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:50:08.908784  304033 cri.go:89] found id: ""
	I1009 18:50:08.908794  304033 logs.go:282] 1 containers: [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f]
	I1009 18:50:08.908882  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.913541  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:50:08.913644  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:50:08.954544  304033 cri.go:89] found id: "b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:50:08.954574  304033 cri.go:89] found id: ""
	I1009 18:50:08.954583  304033 logs.go:282] 1 containers: [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4]
	I1009 18:50:08.954642  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:08.958369  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:50:08.958456  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:50:08.999412  304033 cri.go:89] found id: "d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:50:08.999434  304033 cri.go:89] found id: ""
	I1009 18:50:08.999444  304033 logs.go:282] 1 containers: [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6]
	I1009 18:50:08.999503  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:09.003616  304033 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:50:09.003701  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:50:09.052290  304033 cri.go:89] found id: "658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:50:09.052370  304033 cri.go:89] found id: ""
	I1009 18:50:09.052387  304033 logs.go:282] 1 containers: [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4]
	I1009 18:50:09.052455  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:09.056781  304033 logs.go:123] Gathering logs for kindnet [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4] ...
	I1009 18:50:09.056807  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:50:09.100616  304033 logs.go:123] Gathering logs for container status ...
	I1009 18:50:09.100654  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:50:09.164159  304033 logs.go:123] Gathering logs for kube-apiserver [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06] ...
	I1009 18:50:09.164190  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:50:09.249227  304033 logs.go:123] Gathering logs for kube-scheduler [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f] ...
	I1009 18:50:09.249267  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:50:09.301737  304033 logs.go:123] Gathering logs for kube-proxy [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4] ...
	I1009 18:50:09.301772  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:50:09.341270  304033 logs.go:123] Gathering logs for kube-controller-manager [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6] ...
	I1009 18:50:09.341299  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:50:09.407659  304033 logs.go:123] Gathering logs for coredns [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6] ...
	I1009 18:50:09.407696  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:50:09.449189  304033 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:50:09.449229  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:50:09.548598  304033 logs.go:123] Gathering logs for kubelet ...
	I1009 18:50:09.548639  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:50:09.620389  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.018098    1507 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.620655  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.018147    1507 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.622474  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.075946    1507 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.622684  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.622869  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.623096  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.623283  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.623511  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:50:09.663532  304033 logs.go:123] Gathering logs for dmesg ...
	I1009 18:50:09.663558  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:50:09.680249  304033 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:50:09.680279  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:50:09.824449  304033 logs.go:123] Gathering logs for etcd [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987] ...
	I1009 18:50:09.824479  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:50:09.871748  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:50:09.871776  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:50:09.871877  304033 out.go:270] X Problems detected in kubelet:
	W1009 18:50:09.871895  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.871917  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.871931  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:09.871946  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:09.871961  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:50:09.871968  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:50:09.871979  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:50:19.873265  304033 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 18:50:19.881156  304033 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 18:50:19.882175  304033 api_server.go:141] control plane version: v1.31.1
	I1009 18:50:19.882199  304033 api_server.go:131] duration metric: took 11.163288571s to wait for apiserver health ...
	I1009 18:50:19.882208  304033 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:50:19.882230  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:50:19.882293  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:50:19.926772  304033 cri.go:89] found id: "4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:50:19.926800  304033 cri.go:89] found id: ""
	I1009 18:50:19.926809  304033 logs.go:282] 1 containers: [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06]
	I1009 18:50:19.926869  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:19.931070  304033 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:50:19.931146  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:50:19.970420  304033 cri.go:89] found id: "0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:50:19.970442  304033 cri.go:89] found id: ""
	I1009 18:50:19.970450  304033 logs.go:282] 1 containers: [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987]
	I1009 18:50:19.970505  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:19.974056  304033 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:50:19.974130  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:50:20.034116  304033 cri.go:89] found id: "a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:50:20.034145  304033 cri.go:89] found id: ""
	I1009 18:50:20.034155  304033 logs.go:282] 1 containers: [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6]
	I1009 18:50:20.034226  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.038516  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:50:20.038602  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:50:20.081871  304033 cri.go:89] found id: "5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:50:20.081963  304033 cri.go:89] found id: ""
	I1009 18:50:20.082004  304033 logs.go:282] 1 containers: [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f]
	I1009 18:50:20.082135  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.086281  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:50:20.086481  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:50:20.132872  304033 cri.go:89] found id: "b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:50:20.132899  304033 cri.go:89] found id: ""
	I1009 18:50:20.132949  304033 logs.go:282] 1 containers: [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4]
	I1009 18:50:20.133026  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.136819  304033 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:50:20.136905  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:50:20.181617  304033 cri.go:89] found id: "d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:50:20.181649  304033 cri.go:89] found id: ""
	I1009 18:50:20.181659  304033 logs.go:282] 1 containers: [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6]
	I1009 18:50:20.181727  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.185837  304033 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:50:20.185948  304033 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:50:20.225997  304033 cri.go:89] found id: "658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:50:20.226020  304033 cri.go:89] found id: ""
	I1009 18:50:20.226029  304033 logs.go:282] 1 containers: [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4]
	I1009 18:50:20.226106  304033 ssh_runner.go:195] Run: which crictl
	I1009 18:50:20.229688  304033 logs.go:123] Gathering logs for kube-scheduler [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f] ...
	I1009 18:50:20.229716  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f"
	I1009 18:50:20.283311  304033 logs.go:123] Gathering logs for kube-controller-manager [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6] ...
	I1009 18:50:20.283341  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6"
	I1009 18:50:20.356643  304033 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:50:20.356688  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:50:20.453176  304033 logs.go:123] Gathering logs for dmesg ...
	I1009 18:50:20.453218  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:50:20.469529  304033 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:50:20.469558  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:50:20.613959  304033 logs.go:123] Gathering logs for kube-apiserver [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06] ...
	I1009 18:50:20.613990  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06"
	I1009 18:50:20.665515  304033 logs.go:123] Gathering logs for etcd [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987] ...
	I1009 18:50:20.665558  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987"
	I1009 18:50:20.710862  304033 logs.go:123] Gathering logs for coredns [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6] ...
	I1009 18:50:20.710894  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6"
	I1009 18:50:20.755271  304033 logs.go:123] Gathering logs for kube-proxy [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4] ...
	I1009 18:50:20.755301  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4"
	I1009 18:50:20.794138  304033 logs.go:123] Gathering logs for kindnet [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4] ...
	I1009 18:50:20.794168  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4"
	I1009 18:50:20.839676  304033 logs.go:123] Gathering logs for container status ...
	I1009 18:50:20.839705  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:50:20.912684  304033 logs.go:123] Gathering logs for kubelet ...
	I1009 18:50:20.912735  304033 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:50:20.985478  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.018098    1507 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-527950' and this object
	W1009 18:50:20.985720  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.018147    1507 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:20.987574  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.075946    1507 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-527950' and this object
	W1009 18:50:20.987790  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:20.987983  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:20.988208  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:20.988400  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:20.988647  304033 logs.go:138] Found kubelet problem: Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:50:21.030574  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:50:21.030613  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:50:21.030677  304033 out.go:270] X Problems detected in kubelet:
	W1009 18:50:21.030693  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.076004    1507 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:21.030705  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091197    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:21.030714  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091247    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	W1009 18:50:21.030726  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: W1009 18:48:22.091297    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-527950" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-527950' and this object
	W1009 18:50:21.030733  304033 out.go:270]   Oct 09 18:48:22 addons-527950 kubelet[1507]: E1009 18:48:22.091310    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-527950\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-527950' and this object" logger="UnhandledError"
	I1009 18:50:21.030743  304033 out.go:358] Setting ErrFile to fd 2...
	I1009 18:50:21.030749  304033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:50:31.043130  304033 system_pods.go:59] 18 kube-system pods found
	I1009 18:50:31.043171  304033 system_pods.go:61] "coredns-7c65d6cfc9-6xlwc" [7b657db0-d9a2-4b1b-b040-1594861c5187] Running
	I1009 18:50:31.043178  304033 system_pods.go:61] "csi-hostpath-attacher-0" [9033f355-551e-4197-9103-ea28436fc20f] Running
	I1009 18:50:31.043183  304033 system_pods.go:61] "csi-hostpath-resizer-0" [6c901c73-0f64-445c-b5bb-1d56a1df3871] Running
	I1009 18:50:31.043187  304033 system_pods.go:61] "csi-hostpathplugin-fhvs9" [c5ab2902-7c70-48f4-9c56-2648607461bc] Running
	I1009 18:50:31.043192  304033 system_pods.go:61] "etcd-addons-527950" [a5818fe5-761e-4c9f-b467-145d8ad3a673] Running
	I1009 18:50:31.043196  304033 system_pods.go:61] "kindnet-c47lt" [c4190aa2-5c00-4d57-b499-893009495d85] Running
	I1009 18:50:31.043200  304033 system_pods.go:61] "kube-apiserver-addons-527950" [6c23db94-a291-469a-96d9-268a10403749] Running
	I1009 18:50:31.043205  304033 system_pods.go:61] "kube-controller-manager-addons-527950" [fdde003f-f7c2-4183-82f0-f0ea5f90142c] Running
	I1009 18:50:31.043211  304033 system_pods.go:61] "kube-ingress-dns-minikube" [32f8c970-729d-4715-8ee2-58f3193bef35] Running
	I1009 18:50:31.043215  304033 system_pods.go:61] "kube-proxy-ffxxn" [de0273e2-e2c4-41f3-af01-1896f193ada7] Running
	I1009 18:50:31.043218  304033 system_pods.go:61] "kube-scheduler-addons-527950" [f3058758-766c-46c7-baf3-0b9adde14be4] Running
	I1009 18:50:31.043223  304033 system_pods.go:61] "metrics-server-84c5f94fbc-2rc87" [e64a4405-7389-449b-b03d-16e9b8fca7b6] Running
	I1009 18:50:31.043227  304033 system_pods.go:61] "nvidia-device-plugin-daemonset-frbq8" [b905a7fe-20fc-4877-8f83-6613af7e0f2b] Running
	I1009 18:50:31.043234  304033 system_pods.go:61] "registry-66c9cd494c-dqnph" [63b4033a-0f05-44d5-becd-204fc75b1b5c] Running
	I1009 18:50:31.043238  304033 system_pods.go:61] "registry-proxy-l7mmn" [2399aceb-9c2c-40ca-9f5f-edd537c9676d] Running
	I1009 18:50:31.043242  304033 system_pods.go:61] "snapshot-controller-56fcc65765-7tc9k" [3d03cc14-c35a-4734-a2d8-efffe0f29a73] Running
	I1009 18:50:31.043248  304033 system_pods.go:61] "snapshot-controller-56fcc65765-rvjvp" [0623d873-45a3-4d01-bdec-f4a397c4712e] Running
	I1009 18:50:31.043252  304033 system_pods.go:61] "storage-provisioner" [3c893131-8d79-4974-a73a-7dd25740dbf4] Running
	I1009 18:50:31.043257  304033 system_pods.go:74] duration metric: took 11.161043865s to wait for pod list to return data ...
	I1009 18:50:31.043268  304033 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:50:31.045688  304033 default_sa.go:45] found service account: "default"
	I1009 18:50:31.045718  304033 default_sa.go:55] duration metric: took 2.444131ms for default service account to be created ...
	I1009 18:50:31.045729  304033 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:50:31.056292  304033 system_pods.go:86] 18 kube-system pods found
	I1009 18:50:31.056329  304033 system_pods.go:89] "coredns-7c65d6cfc9-6xlwc" [7b657db0-d9a2-4b1b-b040-1594861c5187] Running
	I1009 18:50:31.056338  304033 system_pods.go:89] "csi-hostpath-attacher-0" [9033f355-551e-4197-9103-ea28436fc20f] Running
	I1009 18:50:31.056367  304033 system_pods.go:89] "csi-hostpath-resizer-0" [6c901c73-0f64-445c-b5bb-1d56a1df3871] Running
	I1009 18:50:31.056380  304033 system_pods.go:89] "csi-hostpathplugin-fhvs9" [c5ab2902-7c70-48f4-9c56-2648607461bc] Running
	I1009 18:50:31.056386  304033 system_pods.go:89] "etcd-addons-527950" [a5818fe5-761e-4c9f-b467-145d8ad3a673] Running
	I1009 18:50:31.056395  304033 system_pods.go:89] "kindnet-c47lt" [c4190aa2-5c00-4d57-b499-893009495d85] Running
	I1009 18:50:31.056400  304033 system_pods.go:89] "kube-apiserver-addons-527950" [6c23db94-a291-469a-96d9-268a10403749] Running
	I1009 18:50:31.056406  304033 system_pods.go:89] "kube-controller-manager-addons-527950" [fdde003f-f7c2-4183-82f0-f0ea5f90142c] Running
	I1009 18:50:31.056411  304033 system_pods.go:89] "kube-ingress-dns-minikube" [32f8c970-729d-4715-8ee2-58f3193bef35] Running
	I1009 18:50:31.056419  304033 system_pods.go:89] "kube-proxy-ffxxn" [de0273e2-e2c4-41f3-af01-1896f193ada7] Running
	I1009 18:50:31.056423  304033 system_pods.go:89] "kube-scheduler-addons-527950" [f3058758-766c-46c7-baf3-0b9adde14be4] Running
	I1009 18:50:31.056434  304033 system_pods.go:89] "metrics-server-84c5f94fbc-2rc87" [e64a4405-7389-449b-b03d-16e9b8fca7b6] Running
	I1009 18:50:31.056443  304033 system_pods.go:89] "nvidia-device-plugin-daemonset-frbq8" [b905a7fe-20fc-4877-8f83-6613af7e0f2b] Running
	I1009 18:50:31.056448  304033 system_pods.go:89] "registry-66c9cd494c-dqnph" [63b4033a-0f05-44d5-becd-204fc75b1b5c] Running
	I1009 18:50:31.056455  304033 system_pods.go:89] "registry-proxy-l7mmn" [2399aceb-9c2c-40ca-9f5f-edd537c9676d] Running
	I1009 18:50:31.056462  304033 system_pods.go:89] "snapshot-controller-56fcc65765-7tc9k" [3d03cc14-c35a-4734-a2d8-efffe0f29a73] Running
	I1009 18:50:31.056467  304033 system_pods.go:89] "snapshot-controller-56fcc65765-rvjvp" [0623d873-45a3-4d01-bdec-f4a397c4712e] Running
	I1009 18:50:31.056477  304033 system_pods.go:89] "storage-provisioner" [3c893131-8d79-4974-a73a-7dd25740dbf4] Running
	I1009 18:50:31.056485  304033 system_pods.go:126] duration metric: took 10.749976ms to wait for k8s-apps to be running ...
	I1009 18:50:31.056497  304033 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:50:31.056555  304033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:50:31.069211  304033 system_svc.go:56] duration metric: took 12.704113ms WaitForService to wait for kubelet
	I1009 18:50:31.069241  304033 kubeadm.go:582] duration metric: took 2m55.6382755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:50:31.069261  304033 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:50:31.072390  304033 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 18:50:31.072424  304033 node_conditions.go:123] node cpu capacity is 2
	I1009 18:50:31.072435  304033 node_conditions.go:105] duration metric: took 3.168872ms to run NodePressure ...
	I1009 18:50:31.072448  304033 start.go:241] waiting for startup goroutines ...
	I1009 18:50:31.072455  304033 start.go:246] waiting for cluster config update ...
	I1009 18:50:31.072475  304033 start.go:255] writing updated cluster config ...
	I1009 18:50:31.072768  304033 ssh_runner.go:195] Run: rm -f paused
	I1009 18:50:31.436305  304033 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 18:50:31.438854  304033 out.go:177] * Done! kubectl is now configured to use "addons-527950" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:03:46 addons-527950 crio[969]: time="2024-10-09 19:03:46.546859422Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-mw22v Namespace:ingress-nginx ID:321c68f923445ef76e1530cd75bf4219197b5bfca01ef520b415d24fb9a33bb1 UID:56cf08ee-ecab-4d5b-a76a-9077a4a9a291 NetNS:/var/run/netns/d076ac65-4336-4fa1-b1c0-d95ebb6d993c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 19:03:46 addons-527950 crio[969]: time="2024-10-09 19:03:46.547009894Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-mw22v from CNI network \"kindnet\" (type=ptp)"
	Oct 09 19:03:46 addons-527950 crio[969]: time="2024-10-09 19:03:46.569511194Z" level=info msg="Stopped pod sandbox: 321c68f923445ef76e1530cd75bf4219197b5bfca01ef520b415d24fb9a33bb1" id=38a3cefd-c0d7-4b05-ac16-456b1bd106d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:03:46 addons-527950 crio[969]: time="2024-10-09 19:03:46.711492257Z" level=info msg="Removing container: 8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d" id=885d453e-28a1-4e79-9c51-efe56ed1012a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:03:46 addons-527950 crio[969]: time="2024-10-09 19:03:46.727154711Z" level=info msg="Removed container 8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d: ingress-nginx/ingress-nginx-controller-bc57996ff-mw22v/controller" id=885d453e-28a1-4e79-9c51-efe56ed1012a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.658043257Z" level=info msg="Removing container: ffbe4cd106a2b5508417a868fc3ba4c43c05888d1468f8341d28df0155412a02" id=a4f8f1b4-e51e-4faa-9e40-8f6e5c4b20d1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.674779156Z" level=info msg="Removed container ffbe4cd106a2b5508417a868fc3ba4c43c05888d1468f8341d28df0155412a02: ingress-nginx/ingress-nginx-admission-create-4xppt/create" id=a4f8f1b4-e51e-4faa-9e40-8f6e5c4b20d1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.676209061Z" level=info msg="Removing container: faf91b9af24fcd1ef1b64eaaa8ef81bbdd879c1bcde52b7dade1876be0d67d25" id=20d72937-df08-4d8d-913e-4ba11f06925b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.692562549Z" level=info msg="Removed container faf91b9af24fcd1ef1b64eaaa8ef81bbdd879c1bcde52b7dade1876be0d67d25: ingress-nginx/ingress-nginx-admission-patch-h88zl/patch" id=20d72937-df08-4d8d-913e-4ba11f06925b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.693822675Z" level=info msg="Stopping pod sandbox: e577c1f75939e132958c5526c8ae5182fa3a98f0f63c30afbd9ba3c2c5c95e5a" id=e879b011-39ee-4df4-94f0-1f9f871ce1e5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.693865563Z" level=info msg="Stopped pod sandbox (already stopped): e577c1f75939e132958c5526c8ae5182fa3a98f0f63c30afbd9ba3c2c5c95e5a" id=e879b011-39ee-4df4-94f0-1f9f871ce1e5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.694352424Z" level=info msg="Removing pod sandbox: e577c1f75939e132958c5526c8ae5182fa3a98f0f63c30afbd9ba3c2c5c95e5a" id=7fde581d-c2fe-48cc-8c6e-e1e8578538dc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.705810825Z" level=info msg="Removed pod sandbox: e577c1f75939e132958c5526c8ae5182fa3a98f0f63c30afbd9ba3c2c5c95e5a" id=7fde581d-c2fe-48cc-8c6e-e1e8578538dc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.706389468Z" level=info msg="Stopping pod sandbox: 321c68f923445ef76e1530cd75bf4219197b5bfca01ef520b415d24fb9a33bb1" id=26cfd9f1-b2f8-4d0e-9298-f0880d704881 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.706430722Z" level=info msg="Stopped pod sandbox (already stopped): 321c68f923445ef76e1530cd75bf4219197b5bfca01ef520b415d24fb9a33bb1" id=26cfd9f1-b2f8-4d0e-9298-f0880d704881 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.706836977Z" level=info msg="Removing pod sandbox: 321c68f923445ef76e1530cd75bf4219197b5bfca01ef520b415d24fb9a33bb1" id=bcae03c2-fac9-476e-a7b2-0feb96d5dd97 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.717536817Z" level=info msg="Removed pod sandbox: 321c68f923445ef76e1530cd75bf4219197b5bfca01ef520b415d24fb9a33bb1" id=bcae03c2-fac9-476e-a7b2-0feb96d5dd97 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.718247117Z" level=info msg="Stopping pod sandbox: e6f0ed9c301652e8bd068cedc7c30e4f1a116781c18cf91832b0c090c61788b7" id=f7a619fc-fb86-444a-b227-132c2be5460e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.718293861Z" level=info msg="Stopped pod sandbox (already stopped): e6f0ed9c301652e8bd068cedc7c30e4f1a116781c18cf91832b0c090c61788b7" id=f7a619fc-fb86-444a-b227-132c2be5460e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.718633671Z" level=info msg="Removing pod sandbox: e6f0ed9c301652e8bd068cedc7c30e4f1a116781c18cf91832b0c090c61788b7" id=491f4c32-006d-4221-88b6-7fb12b3671ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.728169375Z" level=info msg="Removed pod sandbox: e6f0ed9c301652e8bd068cedc7c30e4f1a116781c18cf91832b0c090c61788b7" id=491f4c32-006d-4221-88b6-7fb12b3671ee name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.728661389Z" level=info msg="Stopping pod sandbox: e4abe729152df581b9315f7fd2c5303f3c7fb136a8c8c1741858851c36b76f31" id=df0b777d-34db-4a74-b3aa-9fd9c9047206 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.728698796Z" level=info msg="Stopped pod sandbox (already stopped): e4abe729152df581b9315f7fd2c5303f3c7fb136a8c8c1741858851c36b76f31" id=df0b777d-34db-4a74-b3aa-9fd9c9047206 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.729338385Z" level=info msg="Removing pod sandbox: e4abe729152df581b9315f7fd2c5303f3c7fb136a8c8c1741858851c36b76f31" id=89acde6d-7e5e-402b-b390-5e126f9a75f2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:04:30 addons-527950 crio[969]: time="2024-10-09 19:04:30.738586938Z" level=info msg="Removed pod sandbox: e4abe729152df581b9315f7fd2c5303f3c7fb136a8c8c1741858851c36b76f31" id=89acde6d-7e5e-402b-b390-5e126f9a75f2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	98d5082f8a293       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   8bba71988dce5       hello-world-app-55bf9c44b4-5xw8s
	b18b65bf88187       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago        Running             busybox                   0                   14247c109f31e       busybox
	f02ff73153e5e       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         3 minutes ago        Running             nginx                     0                   92fd777867eab       nginx
	b67d45aed99f6       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago       Running             metrics-server            0                   524107a4c0848       metrics-server-84c5f94fbc-2rc87
	a03317a4b67c6       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        16 minutes ago       Running             coredns                   0                   d0da5a2170e78       coredns-7c65d6cfc9-6xlwc
	799e144a48722       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        16 minutes ago       Running             storage-provisioner       0                   9c95e646a7d47       storage-provisioner
	658aec82c4852       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387                      17 minutes ago       Running             kindnet-cni               0                   c45520dd5f0ea       kindnet-c47lt
	b83030b4e3f98       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        17 minutes ago       Running             kube-proxy                0                   a06994e24d95d       kube-proxy-ffxxn
	5f813d31843f7       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        17 minutes ago       Running             kube-scheduler            0                   26277e30ab828       kube-scheduler-addons-527950
	4f5e96ddc0d0d       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        17 minutes ago       Running             kube-apiserver            0                   ab714852cadcb       kube-apiserver-addons-527950
	0217e145b7d2d       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        17 minutes ago       Running             etcd                      0                   e6459e5817828       etcd-addons-527950
	d67726f6c9dd2       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        17 minutes ago       Running             kube-controller-manager   0                   b8081ea9d38e8       kube-controller-manager-addons-527950
	
	
	==> coredns [a03317a4b67c67af71e3307776fc35ea99894b84967c8f1e52c19dc8ad7e14b6] <==
	[INFO] 10.244.0.19:40119 - 61198 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066411s
	[INFO] 10.244.0.19:38017 - 2538 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003902625s
	[INFO] 10.244.0.19:40119 - 7213 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002963536s
	[INFO] 10.244.0.19:38017 - 22405 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001988764s
	[INFO] 10.244.0.19:40119 - 22924 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002258652s
	[INFO] 10.244.0.19:38017 - 27814 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000153597s
	[INFO] 10.244.0.19:40119 - 43001 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000142603s
	[INFO] 10.244.0.19:37589 - 12877 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000125151s
	[INFO] 10.244.0.19:35665 - 62504 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039261s
	[INFO] 10.244.0.19:37589 - 44151 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092397s
	[INFO] 10.244.0.19:35665 - 56418 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000063253s
	[INFO] 10.244.0.19:35665 - 211 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076922s
	[INFO] 10.244.0.19:37589 - 16863 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056631s
	[INFO] 10.244.0.19:37589 - 14604 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068938s
	[INFO] 10.244.0.19:35665 - 4557 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053858s
	[INFO] 10.244.0.19:35665 - 18435 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061603s
	[INFO] 10.244.0.19:37589 - 21920 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055022s
	[INFO] 10.244.0.19:35665 - 54524 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060077s
	[INFO] 10.244.0.19:37589 - 21147 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003561s
	[INFO] 10.244.0.19:37589 - 7399 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001140718s
	[INFO] 10.244.0.19:35665 - 3541 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001760345s
	[INFO] 10.244.0.19:35665 - 23418 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001028105s
	[INFO] 10.244.0.19:37589 - 28254 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001582124s
	[INFO] 10.244.0.19:37589 - 792 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045202s
	[INFO] 10.244.0.19:35665 - 48658 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00020725s
	
	
	==> describe nodes <==
	Name:               addons-527950
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-527950
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=addons-527950
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T18_47_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-527950
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 18:47:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-527950
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:04:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:04:11 +0000   Wed, 09 Oct 2024 18:47:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:04:11 +0000   Wed, 09 Oct 2024 18:47:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:04:11 +0000   Wed, 09 Oct 2024 18:47:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:04:11 +0000   Wed, 09 Oct 2024 18:48:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-527950
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1a4c86c9e134492be73c30935e3dc26
	  System UUID:                cd82509e-5289-4bc4-999f-dab5bb62981a
	  Boot ID:                    0eb94caa-53b6-43b0-a9b7-c0b1f1bd6146
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-5xw8s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-7c65d6cfc9-6xlwc                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-527950                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-c47lt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-527950             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-527950    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-ffxxn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-527950             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-84c5f94fbc-2rc87          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         17m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17m   kube-proxy       
	  Normal   Starting                 17m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m   kubelet          Node addons-527950 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m   kubelet          Node addons-527950 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m   kubelet          Node addons-527950 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m   node-controller  Node addons-527950 event: Registered Node addons-527950 in Controller
	  Normal   NodeReady                16m   kubelet          Node addons-527950 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015488] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.459769] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.052269] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016609] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.591205] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.456398] kauditd_printk_skb: 34 callbacks suppressed
	[Oct 9 17:28] hrtimer: interrupt took 5876350 ns
	[Oct 9 17:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0217e145b7d2d8c9ba3570f5a07d1afe795ccd1162fb5916b602377a117ec987] <==
	{"level":"info","ts":"2024-10-09T18:47:23.667994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-09T18:47:23.668044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-09T18:47:23.668089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:23.668123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:23.668161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:23.668196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:23.671841Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:23.673734Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-527950 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T18:47:23.675856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:47:23.675880Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:23.676011Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:23.676073Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:23.676129Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T18:47:23.676163Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T18:47:23.676198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:47:23.676869Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:47:23.677798Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T18:47:23.676869Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:47:23.685106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-09T18:57:25.505410Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1490}
	{"level":"info","ts":"2024-10-09T18:57:25.536695Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1490,"took":"30.721598ms","hash":3020432765,"current-db-size-bytes":6037504,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3137536,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-10-09T18:57:25.536750Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3020432765,"revision":1490,"compact-revision":-1}
	{"level":"info","ts":"2024-10-09T19:02:25.510882Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1909}
	{"level":"info","ts":"2024-10-09T19:02:25.528241Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1909,"took":"16.748567ms","hash":2211837678,"current-db-size-bytes":6037504,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":4411392,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-10-09T19:02:25.528297Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2211837678,"revision":1909,"compact-revision":1490}
	
	
	==> kernel <==
	 19:04:53 up  2:47,  0 users,  load average: 0.20, 0.36, 0.56
	Linux addons-527950 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [658aec82c4852b9ac891f57363ffd3b04f2d0a3420181e9a3a44999d10e8c4c4] <==
	I1009 19:02:51.623488       1 main.go:300] handling current node
	I1009 19:03:01.628045       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:01.628078       1 main.go:300] handling current node
	I1009 19:03:11.623927       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:11.623960       1 main.go:300] handling current node
	I1009 19:03:21.620826       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:21.620859       1 main.go:300] handling current node
	I1009 19:03:31.621761       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:31.621794       1 main.go:300] handling current node
	I1009 19:03:41.621293       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:41.621325       1 main.go:300] handling current node
	I1009 19:03:51.620816       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:03:51.620845       1 main.go:300] handling current node
	I1009 19:04:01.628874       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:04:01.628991       1 main.go:300] handling current node
	I1009 19:04:11.620875       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:04:11.620911       1 main.go:300] handling current node
	I1009 19:04:21.620828       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:04:21.620863       1 main.go:300] handling current node
	I1009 19:04:31.628526       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:04:31.628571       1 main.go:300] handling current node
	I1009 19:04:41.621005       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:04:41.621043       1 main.go:300] handling current node
	I1009 19:04:51.625233       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:04:51.625267       1 main.go:300] handling current node
	
	
	==> kube-apiserver [4f5e96ddc0d0d74f7ab0305594c6e6e0fb46a456d132d3c0bc1cc3c4990fba06] <==
	I1009 18:58:45.052136       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.239.160"}
	E1009 18:59:18.858910       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1009 18:59:19.632162       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1009 18:59:19.642659       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1009 18:59:19.652821       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1009 18:59:34.655531       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1009 19:00:26.824119       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 19:00:58.167589       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.167650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 19:00:58.201909       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.202034       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 19:00:58.223056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.224118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 19:00:58.305794       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.305915       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 19:00:58.441489       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 19:00:58.441615       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 19:00:59.305838       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 19:00:59.442058       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 19:00:59.446587       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1009 19:01:12.023912       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1009 19:01:13.068312       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1009 19:01:17.583406       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 19:01:17.871534       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.211.191"}
	I1009 19:03:38.043333       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.71.197"}
	
	
	==> kube-controller-manager [d67726f6c9dd27fa5d2998b23d997e838f210462c24da414c44a4792d1f909d6] <==
	I1009 19:03:37.811091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.542888ms"
	I1009 19:03:37.823700       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.096765ms"
	I1009 19:03:37.824477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="51.282µs"
	I1009 19:03:37.830895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.251µs"
	I1009 19:03:39.717009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.391063ms"
	I1009 19:03:39.717820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.968µs"
	I1009 19:03:43.360549       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I1009 19:03:43.364992       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.105µs"
	I1009 19:03:43.371196       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W1009 19:03:53.275614       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:53.275679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 19:03:53.500297       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W1009 19:03:53.731359       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:53.731401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 19:04:11.242232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-527950"
	W1009 19:04:14.794897       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:04:14.794941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:04:19.745369       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:04:19.745414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:04:41.110309       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:04:41.110355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:04:46.577003       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:04:46.577047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:04:50.055253       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:04:50.055405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [b83030b4e3f98b7d0ccf90c78c213e4dc9e93a713c6b70dee192ccf9ad0dd7e4] <==
	I1009 18:47:42.077293       1 server_linux.go:66] "Using iptables proxy"
	I1009 18:47:42.418800       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1009 18:47:42.419065       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:47:42.511587       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:47:42.511728       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:47:42.514954       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:47:42.517983       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:47:42.518088       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:47:42.519931       1 config.go:199] "Starting service config controller"
	I1009 18:47:42.520031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:47:42.520066       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:47:42.520071       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:47:42.520890       1 config.go:328] "Starting node config controller"
	I1009 18:47:42.520952       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:47:42.621925       1 shared_informer.go:320] Caches are synced for node config
	I1009 18:47:42.637600       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:47:42.637637       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5f813d31843f7327997d8b712325d1d09b6220fa8be93661fd1eca58b536bc4f] <==
	W1009 18:47:27.693753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:27.696664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:27.693786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 18:47:27.696695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:27.693820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:27.696734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:27.693888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 18:47:27.696753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.549655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:28.549702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.677965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:28.678019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.686047       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 18:47:28.686090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.693367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 18:47:28.693412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.699003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 18:47:28.699047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.718753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 18:47:28.718866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.809951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 18:47:28.810070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:28.817852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 18:47:28.817945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1009 18:47:29.278732       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:03:46 addons-527950 kubelet[1507]: I1009 19:03:46.675959    1507 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56cf08ee-ecab-4d5b-a76a-9077a4a9a291-kube-api-access-cxdhx" (OuterVolumeSpecName: "kube-api-access-cxdhx") pod "56cf08ee-ecab-4d5b-a76a-9077a4a9a291" (UID: "56cf08ee-ecab-4d5b-a76a-9077a4a9a291"). InnerVolumeSpecName "kube-api-access-cxdhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 09 19:03:46 addons-527950 kubelet[1507]: I1009 19:03:46.710055    1507 scope.go:117] "RemoveContainer" containerID="8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d"
	Oct 09 19:03:46 addons-527950 kubelet[1507]: I1009 19:03:46.727418    1507 scope.go:117] "RemoveContainer" containerID="8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d"
	Oct 09 19:03:46 addons-527950 kubelet[1507]: E1009 19:03:46.727819    1507 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d\": container with ID starting with 8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d not found: ID does not exist" containerID="8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d"
	Oct 09 19:03:46 addons-527950 kubelet[1507]: I1009 19:03:46.727877    1507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d"} err="failed to get container status \"8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d\": rpc error: code = NotFound desc = could not find container \"8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d\": container with ID starting with 8c0dbf1ad1c32a58ebcb98b3689e9a6d5e3e74805c16663a405e5eaac136f23d not found: ID does not exist"
	Oct 09 19:03:46 addons-527950 kubelet[1507]: I1009 19:03:46.771576    1507 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56cf08ee-ecab-4d5b-a76a-9077a4a9a291-webhook-cert\") on node \"addons-527950\" DevicePath \"\""
	Oct 09 19:03:46 addons-527950 kubelet[1507]: I1009 19:03:46.771620    1507 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cxdhx\" (UniqueName: \"kubernetes.io/projected/56cf08ee-ecab-4d5b-a76a-9077a4a9a291-kube-api-access-cxdhx\") on node \"addons-527950\" DevicePath \"\""
	Oct 09 19:03:48 addons-527950 kubelet[1507]: I1009 19:03:48.305207    1507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56cf08ee-ecab-4d5b-a76a-9077a4a9a291" path="/var/lib/kubelet/pods/56cf08ee-ecab-4d5b-a76a-9077a4a9a291/volumes"
	Oct 09 19:03:50 addons-527950 kubelet[1507]: E1009 19:03:50.651740    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500630651486927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:50 addons-527950 kubelet[1507]: E1009 19:03:50.651772    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500630651486927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:00 addons-527950 kubelet[1507]: E1009 19:04:00.655250    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500640654936757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:00 addons-527950 kubelet[1507]: E1009 19:04:00.655290    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500640654936757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:10 addons-527950 kubelet[1507]: E1009 19:04:10.658343    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500650658082041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:10 addons-527950 kubelet[1507]: E1009 19:04:10.658379    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500650658082041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:20 addons-527950 kubelet[1507]: E1009 19:04:20.661604    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500660661358022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:20 addons-527950 kubelet[1507]: E1009 19:04:20.661765    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500660661358022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:30 addons-527950 kubelet[1507]: I1009 19:04:30.656522    1507 scope.go:117] "RemoveContainer" containerID="ffbe4cd106a2b5508417a868fc3ba4c43c05888d1468f8341d28df0155412a02"
	Oct 09 19:04:30 addons-527950 kubelet[1507]: E1009 19:04:30.664728    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500670664490216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:30 addons-527950 kubelet[1507]: E1009 19:04:30.664762    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500670664490216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:30 addons-527950 kubelet[1507]: I1009 19:04:30.675023    1507 scope.go:117] "RemoveContainer" containerID="faf91b9af24fcd1ef1b64eaaa8ef81bbdd879c1bcde52b7dade1876be0d67d25"
	Oct 09 19:04:40 addons-527950 kubelet[1507]: E1009 19:04:40.667782    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500680667544902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:40 addons-527950 kubelet[1507]: E1009 19:04:40.667814    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500680667544902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:50 addons-527950 kubelet[1507]: E1009 19:04:50.671440    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500690671189408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:50 addons-527950 kubelet[1507]: E1009 19:04:50.671480    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500690671189408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606905,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:04:51 addons-527950 kubelet[1507]: I1009 19:04:51.303201    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [799e144a48722ce5cda01719fdb32be7ddb6725e7b6e5f91aac8ca8c0dcde633] <==
	I1009 18:48:22.888446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:48:22.929675       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:48:22.929750       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:48:22.949509       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:48:22.949676       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-527950_64395e50-c268-4440-b675-d6757c097dd6!
	I1009 18:48:22.964178       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2bc5529d-9755-4c7c-824d-aeaa87ec6d9e", APIVersion:"v1", ResourceVersion:"873", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-527950_64395e50-c268-4440-b675-d6757c097dd6 became leader
	I1009 18:48:23.050065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-527950_64395e50-c268-4440-b675-d6757c097dd6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-527950 -n addons-527950
helpers_test.go:261: (dbg) Run:  kubectl --context addons-527950 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (336.30s)


Test pass (296/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.66
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.65
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 230.54
31 TestAddons/serial/GCPAuth/Namespaces 0.21
34 TestAddons/parallel/Registry 17.67
36 TestAddons/parallel/InspektorGadget 11.77
39 TestAddons/parallel/CSI 63.28
40 TestAddons/parallel/Headlamp 16.83
41 TestAddons/parallel/CloudSpanner 5.85
42 TestAddons/parallel/LocalPath 53.83
43 TestAddons/parallel/NvidiaDevicePlugin 6.57
44 TestAddons/parallel/Yakd 12.01
45 TestAddons/StoppedEnableDisable 12.13
46 TestCertOptions 37.14
47 TestCertExpiration 271.67
49 TestForceSystemdFlag 35.97
50 TestForceSystemdEnv 36.57
56 TestErrorSpam/setup 30.1
57 TestErrorSpam/start 0.75
58 TestErrorSpam/status 1.15
59 TestErrorSpam/pause 1.83
60 TestErrorSpam/unpause 1.94
61 TestErrorSpam/stop 1.45
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 50.77
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 27.86
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.35
73 TestFunctional/serial/CacheCmd/cache/add_local 1.41
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.16
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
81 TestFunctional/serial/ExtraConfig 33.38
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.74
84 TestFunctional/serial/LogsFileCmd 2.24
85 TestFunctional/serial/InvalidService 4
87 TestFunctional/parallel/ConfigCmd 0.54
88 TestFunctional/parallel/DashboardCmd 13.97
89 TestFunctional/parallel/DryRun 0.65
90 TestFunctional/parallel/InternationalLanguage 0.25
91 TestFunctional/parallel/StatusCmd 1.27
95 TestFunctional/parallel/ServiceCmdConnect 11.67
96 TestFunctional/parallel/AddonsCmd 0.16
97 TestFunctional/parallel/PersistentVolumeClaim 25.1
99 TestFunctional/parallel/SSHCmd 0.65
100 TestFunctional/parallel/CpCmd 2.29
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.93
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
111 TestFunctional/parallel/License 0.41
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.51
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
125 TestFunctional/parallel/ProfileCmd/profile_list 0.42
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
127 TestFunctional/parallel/MountCmd/any-port 7.97
128 TestFunctional/parallel/ServiceCmd/List 0.5
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
131 TestFunctional/parallel/ServiceCmd/Format 0.37
132 TestFunctional/parallel/ServiceCmd/URL 0.36
133 TestFunctional/parallel/MountCmd/specific-port 1.62
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.13
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
141 TestFunctional/parallel/ImageCommands/ImageBuild 4.05
142 TestFunctional/parallel/ImageCommands/Setup 0.84
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.09
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 181.48
160 TestMultiControlPlane/serial/DeployApp 8.26
161 TestMultiControlPlane/serial/PingHostFromPods 1.59
162 TestMultiControlPlane/serial/AddWorkerNode 34.29
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
165 TestMultiControlPlane/serial/CopyFile 19.21
166 TestMultiControlPlane/serial/StopSecondaryNode 12.72
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
168 TestMultiControlPlane/serial/RestartSecondaryNode 25.07
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.27
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 205.29
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.53
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
173 TestMultiControlPlane/serial/StopCluster 36.04
174 TestMultiControlPlane/serial/RestartCluster 64.87
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
176 TestMultiControlPlane/serial/AddSecondaryNode 75.04
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
181 TestJSONOutput/start/Command 50.2
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.76
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.87
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.24
206 TestKicCustomNetwork/create_custom_network 37.35
207 TestKicCustomNetwork/use_default_bridge_network 34
208 TestKicExistingNetwork 32.19
209 TestKicCustomSubnet 31.82
210 TestKicStaticIP 35.7
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 70.01
215 TestMountStart/serial/StartWithMountFirst 7.06
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.8
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.66
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 8.72
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 78.5
227 TestMultiNode/serial/DeployApp2Nodes 7.49
228 TestMultiNode/serial/PingHostFrom2Pods 1
229 TestMultiNode/serial/AddNode 29.91
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.68
232 TestMultiNode/serial/CopyFile 9.99
233 TestMultiNode/serial/StopNode 2.29
234 TestMultiNode/serial/StartAfterStop 9.76
235 TestMultiNode/serial/RestartKeepsNodes 93.51
236 TestMultiNode/serial/DeleteNode 6.15
237 TestMultiNode/serial/StopMultiNode 23.85
238 TestMultiNode/serial/RestartMultiNode 51.53
239 TestMultiNode/serial/ValidateNameConflict 34.39
244 TestPreload 143.44
246 TestScheduledStopUnix 106.76
249 TestInsufficientStorage 10.55
250 TestRunningBinaryUpgrade 81.98
252 TestKubernetesUpgrade 138.94
253 TestMissingContainerUpgrade 163.54
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 39.06
257 TestNoKubernetes/serial/StartWithStopK8s 8.64
258 TestNoKubernetes/serial/Start 8.56
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
260 TestNoKubernetes/serial/ProfileList 2.73
261 TestNoKubernetes/serial/Stop 1.28
262 TestNoKubernetes/serial/StartNoArgs 7.46
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
264 TestStoppedBinaryUpgrade/Setup 0.75
265 TestStoppedBinaryUpgrade/Upgrade 83.92
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
275 TestPause/serial/Start 62.82
276 TestPause/serial/SecondStartNoReconfiguration 35.57
284 TestNetworkPlugins/group/false 5.8
288 TestPause/serial/Pause 0.96
289 TestPause/serial/VerifyStatus 0.41
290 TestPause/serial/Unpause 0.92
291 TestPause/serial/PauseAgain 1.15
292 TestPause/serial/DeletePaused 3.16
293 TestPause/serial/VerifyDeletedResources 0.46
295 TestStartStop/group/old-k8s-version/serial/FirstStart 128.42
296 TestStartStop/group/old-k8s-version/serial/DeployApp 11.65
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
298 TestStartStop/group/old-k8s-version/serial/Stop 11.99
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/old-k8s-version/serial/SecondStart 136.58
302 TestStartStop/group/no-preload/serial/FirstStart 65.3
303 TestStartStop/group/no-preload/serial/DeployApp 10.38
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.41
305 TestStartStop/group/no-preload/serial/Stop 12
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.15
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
310 TestStartStop/group/no-preload/serial/SecondStart 278.29
311 TestStartStop/group/old-k8s-version/serial/Pause 3.24
313 TestStartStop/group/embed-certs/serial/FirstStart 57.59
314 TestStartStop/group/embed-certs/serial/DeployApp 10.34
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
316 TestStartStop/group/embed-certs/serial/Stop 12
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
318 TestStartStop/group/embed-certs/serial/SecondStart 265.92
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/no-preload/serial/Pause 3.04
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.58
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.37
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
331 TestStartStop/group/embed-certs/serial/Pause 2.95
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.22
335 TestStartStop/group/newest-cni/serial/FirstStart 45.03
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
338 TestStartStop/group/newest-cni/serial/Stop 2.17
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
340 TestStartStop/group/newest-cni/serial/SecondStart 17.5
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
344 TestStartStop/group/newest-cni/serial/Pause 3.2
345 TestNetworkPlugins/group/auto/Start 49.1
346 TestNetworkPlugins/group/auto/KubeletFlags 0.3
347 TestNetworkPlugins/group/auto/NetCatPod 10.29
348 TestNetworkPlugins/group/auto/DNS 0.2
349 TestNetworkPlugins/group/auto/Localhost 0.17
350 TestNetworkPlugins/group/auto/HairPin 0.16
351 TestNetworkPlugins/group/kindnet/Start 51.71
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
354 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
355 TestNetworkPlugins/group/kindnet/DNS 0.2
356 TestNetworkPlugins/group/kindnet/Localhost 0.15
357 TestNetworkPlugins/group/kindnet/HairPin 0.16
358 TestNetworkPlugins/group/calico/Start 68.08
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.47
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/custom-flannel/Start 65.66
365 TestNetworkPlugins/group/calico/KubeletFlags 0.35
366 TestNetworkPlugins/group/calico/NetCatPod 12.3
367 TestNetworkPlugins/group/calico/DNS 1.21
368 TestNetworkPlugins/group/calico/Localhost 0.24
369 TestNetworkPlugins/group/calico/HairPin 0.28
370 TestNetworkPlugins/group/enable-default-cni/Start 81.39
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
373 TestNetworkPlugins/group/custom-flannel/DNS 0.23
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
376 TestNetworkPlugins/group/flannel/Start 53.39
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.44
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/Start 78.9
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
385 TestNetworkPlugins/group/flannel/NetCatPod 12.31
386 TestNetworkPlugins/group/flannel/DNS 0.29
387 TestNetworkPlugins/group/flannel/Localhost 0.23
388 TestNetworkPlugins/group/flannel/HairPin 0.2
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 10.25
391 TestNetworkPlugins/group/bridge/DNS 0.18
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (9.66s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-041328 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-041328 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.655261657s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.66s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1009 18:46:31.883725  303278 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1009 18:46:31.883805  303278 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-041328
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-041328: exit status 85 (79.240131ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-041328 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |          |
	|         | -p download-only-041328        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:22
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:22.275425  303283 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:22.275606  303283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:22.275619  303283 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:22.275625  303283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:22.275933  303283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	W1009 18:46:22.276078  303283 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19780-297764/.minikube/config/config.json: open /home/jenkins/minikube-integration/19780-297764/.minikube/config/config.json: no such file or directory
	I1009 18:46:22.276498  303283 out.go:352] Setting JSON to true
	I1009 18:46:22.277350  303283 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8930,"bootTime":1728490653,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 18:46:22.277435  303283 start.go:139] virtualization:  
	I1009 18:46:22.281017  303283 out.go:97] [download-only-041328] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1009 18:46:22.281187  303283 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 18:46:22.281230  303283 notify.go:220] Checking for updates...
	I1009 18:46:22.283465  303283 out.go:169] MINIKUBE_LOCATION=19780
	I1009 18:46:22.285976  303283 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:22.288136  303283 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 18:46:22.290056  303283 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	I1009 18:46:22.291924  303283 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 18:46:22.296528  303283 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:46:22.296795  303283 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:22.320148  303283 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:22.320286  303283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:22.386493  303283 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 18:46:22.376112888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:22.386621  303283 docker.go:318] overlay module found
	I1009 18:46:22.389723  303283 out.go:97] Using the docker driver based on user configuration
	I1009 18:46:22.389758  303283 start.go:297] selected driver: docker
	I1009 18:46:22.389766  303283 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:22.389882  303283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:22.452995  303283 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 18:46:22.443625735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:22.453221  303283 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:22.453519  303283 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1009 18:46:22.453681  303283 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:46:22.456400  303283 out.go:169] Using Docker driver with root privileges
	I1009 18:46:22.458415  303283 cni.go:84] Creating CNI manager for ""
	I1009 18:46:22.458481  303283 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:22.458495  303283 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:22.458604  303283 start.go:340] cluster config:
	{Name:download-only-041328 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-041328 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:22.460568  303283 out.go:97] Starting "download-only-041328" primary control-plane node in "download-only-041328" cluster
	I1009 18:46:22.460599  303283 cache.go:121] Beginning downloading kic base image for docker with crio
	I1009 18:46:22.462877  303283 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:22.462914  303283 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 18:46:22.463089  303283 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:22.479236  303283 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:22.479418  303283 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:22.479508  303283 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:22.524246  303283 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1009 18:46:22.524273  303283 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:22.524425  303283 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 18:46:22.526858  303283 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1009 18:46:22.526880  303283 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1009 18:46:22.626659  303283 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-041328 host does not exist
	  To start a cluster, run: "minikube start -p download-only-041328"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-041328
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (6.65s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-405051 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-405051 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.651314233s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.65s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1009 18:46:38.961062  303278 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1009 18:46:38.961118  303278 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-405051
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-405051: exit status 85 (82.635731ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-041328 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-041328        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-041328        | download-only-041328 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | -o=json --download-only        | download-only-405051 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-405051        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:32.356426  303483 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:32.356540  303483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:32.356548  303483 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:32.356553  303483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:32.356783  303483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 18:46:32.357196  303483 out.go:352] Setting JSON to true
	I1009 18:46:32.358054  303483 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8940,"bootTime":1728490653,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 18:46:32.358127  303483 start.go:139] virtualization:  
	I1009 18:46:32.361144  303483 out.go:97] [download-only-405051] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 18:46:32.361354  303483 notify.go:220] Checking for updates...
	I1009 18:46:32.363688  303483 out.go:169] MINIKUBE_LOCATION=19780
	I1009 18:46:32.366223  303483 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:32.369230  303483 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 18:46:32.372046  303483 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	I1009 18:46:32.374325  303483 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 18:46:32.378488  303483 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:46:32.378744  303483 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:32.398916  303483 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:32.399041  303483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:32.462500  303483 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-09 18:46:32.453060687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:32.462615  303483 docker.go:318] overlay module found
	I1009 18:46:32.465057  303483 out.go:97] Using the docker driver based on user configuration
	I1009 18:46:32.465089  303483 start.go:297] selected driver: docker
	I1009 18:46:32.465097  303483 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:32.465223  303483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:32.511251  303483 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-09 18:46:32.501783279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:32.511459  303483 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:32.511753  303483 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1009 18:46:32.511957  303483 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:46:32.514328  303483 out.go:169] Using Docker driver with root privileges
	I1009 18:46:32.516604  303483 cni.go:84] Creating CNI manager for ""
	I1009 18:46:32.516666  303483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:32.516680  303483 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:32.516767  303483 start.go:340] cluster config:
	{Name:download-only-405051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-405051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:32.519018  303483 out.go:97] Starting "download-only-405051" primary control-plane node in "download-only-405051" cluster
	I1009 18:46:32.519036  303483 cache.go:121] Beginning downloading kic base image for docker with crio
	I1009 18:46:32.521657  303483 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:32.521685  303483 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:32.521855  303483 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:32.537002  303483 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:32.537148  303483 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:32.537178  303483 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1009 18:46:32.537189  303483 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1009 18:46:32.537201  303483 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1009 18:46:32.574246  303483 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1009 18:46:32.574295  303483 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:32.574455  303483 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:32.577234  303483 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1009 18:46:32.577264  303483 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1009 18:46:32.665688  303483 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1009 18:46:37.352730  303483 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1009 18:46:37.352839  303483 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19780-297764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-405051 host does not exist
	  To start a cluster, run: "minikube start -p download-only-405051"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-405051
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
I1009 18:46:40.254930  303278 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-492289 --alsologtostderr --binary-mirror http://127.0.0.1:35183 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-492289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-492289
--- PASS: TestBinaryMirror (0.57s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-527950
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-527950: exit status 85 (84.422083ms)

-- stdout --
	* Profile "addons-527950" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-527950"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-527950
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-527950: exit status 85 (71.365337ms)

-- stdout --
	* Profile "addons-527950" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-527950"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (230.54s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-527950 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-527950 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m50.543080951s)
--- PASS: TestAddons/Setup (230.54s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-527950 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-527950 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/parallel/Registry (17.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 8.963796ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-dqnph" [63b4033a-0f05-44d5-becd-204fc75b1b5c] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004202769s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l7mmn" [2399aceb-9c2c-40ca-9f5f-edd537c9676d] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004155989s
addons_test.go:331: (dbg) Run:  kubectl --context addons-527950 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-527950 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-527950 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.727587154s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 ip
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.67s)

TestAddons/parallel/InspektorGadget (11.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9bwkd" [84b00478-6d37-4c9d-a24d-9576a1f7a359] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004643129s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 addons disable inspektor-gadget --alsologtostderr -v=1: (5.7593741s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

TestAddons/parallel/CSI (63.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1009 19:00:02.232543  303278 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1009 19:00:02.241682  303278 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 19:00:02.241723  303278 kapi.go:107] duration metric: took 9.194439ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.209143ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-527950 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-527950 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [44177b96-d1dd-4cce-abc8-8527a1dfa7f3] Pending
helpers_test.go:344: "task-pv-pod" [44177b96-d1dd-4cce-abc8-8527a1dfa7f3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [44177b96-d1dd-4cce-abc8-8527a1dfa7f3] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004352264s
addons_test.go:511: (dbg) Run:  kubectl --context addons-527950 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-527950 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-527950 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-527950 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-527950 delete pod task-pv-pod: (1.248182258s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-527950 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-527950 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-527950 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c6a75d8c-eeb8-44a9-8499-9df2cc0b6fb1] Pending
helpers_test.go:344: "task-pv-pod-restore" [c6a75d8c-eeb8-44a9-8499-9df2cc0b6fb1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c6a75d8c-eeb8-44a9-8499-9df2cc0b6fb1] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00338297s
addons_test.go:553: (dbg) Run:  kubectl --context addons-527950 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-527950 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-527950 delete volumesnapshot new-snapshot-demo
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 addons disable volumesnapshots --alsologtostderr -v=1: (1.084610275s)
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.765600958s)
--- PASS: TestAddons/parallel/CSI (63.28s)

TestAddons/parallel/Headlamp (16.83s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-527950 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-527950 --alsologtostderr -v=1: (1.030629953s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-sfpwj" [11083f45-59fc-4f00-9a12-5d7dd4fa6243] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-sfpwj" [11083f45-59fc-4f00-9a12-5d7dd4fa6243] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-sfpwj" [11083f45-59fc-4f00-9a12-5d7dd4fa6243] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003692051s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable headlamp --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 addons disable headlamp --alsologtostderr -v=1: (5.791148455s)
--- PASS: TestAddons/parallel/Headlamp (16.83s)

TestAddons/parallel/CloudSpanner (5.85s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-2vzg4" [c51dd59c-b84d-47ac-9768-af638b5bc1b4] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004831479s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.85s)

TestAddons/parallel/LocalPath (53.83s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-527950 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-527950 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-527950 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [25b97f37-41d1-401e-9aa7-05e005b20250] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [25b97f37-41d1-401e-9aa7-05e005b20250] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [25b97f37-41d1-401e-9aa7-05e005b20250] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003510974s
addons_test.go:902: (dbg) Run:  kubectl --context addons-527950 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 ssh "cat /opt/local-path-provisioner/pvc-78e4294a-ee74-4947-a0a7-ae40d0f13e44_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-527950 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-527950 delete pvc test-pvc
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.561576118s)
--- PASS: TestAddons/parallel/LocalPath (53.83s)

TestAddons/parallel/NvidiaDevicePlugin (6.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-frbq8" [b905a7fe-20fc-4877-8f83-6613af7e0f2b] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003549071s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-s46j4" [e99aa51b-66ab-4a64-9a0c-b1ad3ebeda2e] Running
2024/10/09 18:59:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004155088s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable yakd --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-527950 addons disable yakd --alsologtostderr -v=1: (6.001800229s)
--- PASS: TestAddons/parallel/Yakd (12.01s)

TestAddons/StoppedEnableDisable (12.13s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-527950
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-527950: (11.850669944s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-527950
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-527950
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-527950
--- PASS: TestAddons/StoppedEnableDisable (12.13s)

TestCertOptions (37.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-363249 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-363249 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.457919972s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-363249 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-363249 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-363249 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-363249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-363249
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-363249: (2.008123137s)
--- PASS: TestCertOptions (37.14s)

TestCertExpiration (271.67s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-624016 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-624016 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (44.622640906s)
E1009 19:43:06.541021  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-624016 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1009 19:46:09.613088  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-624016 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (44.404979143s)
helpers_test.go:175: Cleaning up "cert-expiration-624016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-624016
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-624016: (2.642397949s)
--- PASS: TestCertExpiration (271.67s)

TestForceSystemdFlag (35.97s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-335591 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-335591 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.334143371s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-335591 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-335591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-335591
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-335591: (2.303473448s)
--- PASS: TestForceSystemdFlag (35.97s)

TestForceSystemdEnv (36.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-952372 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-952372 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.94411761s)
helpers_test.go:175: Cleaning up "force-systemd-env-952372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-952372
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-952372: (2.629036344s)
--- PASS: TestForceSystemdEnv (36.57s)

TestErrorSpam/setup (30.1s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-349899 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-349899 --driver=docker  --container-runtime=crio
E1009 19:05:32.271895  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:32.278239  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:32.289584  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:32.310958  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:32.352402  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:32.433803  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:32.595428  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:32.917042  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:33.559075  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:34.840459  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:37.401812  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:42.523780  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-349899 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-349899 --driver=docker  --container-runtime=crio: (30.098704784s)
--- PASS: TestErrorSpam/setup (30.10s)

TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.94s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 stop: (1.239522262s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-349899 --log_dir /tmp/nospam-349899 stop
--- PASS: TestErrorSpam/stop (1.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19780-297764/.minikube/files/etc/test/nested/copy/303278/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376335 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1009 19:06:13.247113  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-376335 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (50.770599196s)
--- PASS: TestFunctional/serial/StartWithProxy (50.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.86s)

=== RUN   TestFunctional/serial/SoftStart
I1009 19:06:46.410270  303278 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376335 --alsologtostderr -v=8
E1009 19:06:54.208418  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-376335 --alsologtostderr -v=8: (27.856607658s)
functional_test.go:663: soft start took 27.864173108s for "functional-376335" cluster.
I1009 19:07:14.274259  303278 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (27.86s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-376335 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 cache add registry.k8s.io/pause:3.1: (1.526736938s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 cache add registry.k8s.io/pause:3.3: (1.563679962s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 cache add registry.k8s.io/pause:latest: (1.260568092s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.35s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-376335 /tmp/TestFunctionalserialCacheCmdcacheadd_local1624505813/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cache add minikube-local-cache-test:functional-376335
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cache delete minikube-local-cache-test:functional-376335
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-376335
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376335 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (329.738075ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 cache reload: (1.248920155s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 kubectl -- --context functional-376335 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-376335 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (33.38s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376335 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-376335 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.376378564s)
functional_test.go:761: restart took 33.376738994s for "functional-376335" cluster.
I1009 19:07:56.638377  303278 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (33.38s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-376335 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 logs: (1.742343344s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

TestFunctional/serial/LogsFileCmd (2.24s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 logs --file /tmp/TestFunctionalserialLogsFileCmd5597916/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 logs --file /tmp/TestFunctionalserialLogsFileCmd5597916/001/logs.txt: (2.234816122s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.24s)

TestFunctional/serial/InvalidService (4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-376335 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-376335
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-376335: exit status 115 (378.013814ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30493 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-376335 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376335 config get cpus: exit status 14 (95.070316ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376335 config get cpus: exit status 14 (90.608806ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)

TestFunctional/parallel/DashboardCmd (13.97s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-376335 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-376335 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 337825: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.97s)

TestFunctional/parallel/DryRun (0.65s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376335 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-376335 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (234.016949ms)

-- stdout --
	* [functional-376335] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1009 19:08:41.207072  336991 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:08:41.207217  336991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:08:41.207229  336991 out.go:358] Setting ErrFile to fd 2...
	I1009 19:08:41.207263  336991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:08:41.207673  336991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 19:08:41.208762  336991 out.go:352] Setting JSON to false
	I1009 19:08:41.209905  336991 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10269,"bootTime":1728490653,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:08:41.209998  336991 start.go:139] virtualization:  
	I1009 19:08:41.212401  336991 out.go:177] * [functional-376335] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 19:08:41.214539  336991 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:08:41.214642  336991 notify.go:220] Checking for updates...
	I1009 19:08:41.218094  336991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:08:41.219881  336991 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 19:08:41.221763  336991 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	I1009 19:08:41.223819  336991 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:08:41.226121  336991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:08:41.230421  336991 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:08:41.231013  336991 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:08:41.274561  336991 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:08:41.274672  336991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:08:41.346267  336991 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 19:08:41.334722033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:08:41.346380  336991 docker.go:318] overlay module found
	I1009 19:08:41.349339  336991 out.go:177] * Using the docker driver based on existing profile
	I1009 19:08:41.351435  336991 start.go:297] selected driver: docker
	I1009 19:08:41.351452  336991 start.go:901] validating driver "docker" against &{Name:functional-376335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-376335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:08:41.351576  336991 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:08:41.354783  336991 out.go:201] 
	W1009 19:08:41.357909  336991 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 19:08:41.360484  336991 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376335 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.65s)

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-376335 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-376335 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (245.295433ms)

-- stdout --
	* [functional-376335] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1009 19:08:40.965908  336895 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:08:40.966033  336895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:08:40.966041  336895 out.go:358] Setting ErrFile to fd 2...
	I1009 19:08:40.966046  336895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:08:40.966405  336895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 19:08:40.967517  336895 out.go:352] Setting JSON to false
	I1009 19:08:40.968472  336895 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10268,"bootTime":1728490653,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:08:40.968540  336895 start.go:139] virtualization:  
	I1009 19:08:40.971477  336895 out.go:177] * [functional-376335] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1009 19:08:40.974037  336895 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:08:40.976443  336895 notify.go:220] Checking for updates...
	I1009 19:08:40.980409  336895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:08:40.983100  336895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 19:08:40.985301  336895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	I1009 19:08:40.987388  336895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:08:40.989337  336895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:08:40.992005  336895 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:08:40.993159  336895 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:08:41.042022  336895 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:08:41.042156  336895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:08:41.112310  336895 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 19:08:41.102403838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:08:41.112419  336895 docker.go:318] overlay module found
	I1009 19:08:41.116533  336895 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1009 19:08:41.119228  336895 start.go:297] selected driver: docker
	I1009 19:08:41.119249  336895 start.go:901] validating driver "docker" against &{Name:functional-376335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-376335 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:08:41.119362  336895 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:08:41.121675  336895 out.go:201] 
	W1009 19:08:41.124197  336895 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 19:08:41.126238  336895 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
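Both dry runs above exit with status 23 and `RSRC_INSUFFICIENT_REQ_MEMORY` because the requested 250MiB is below minikube's 1800MB usable minimum (the French output is the same error, localized). A minimal sketch of such a preflight check — the names and constant here are illustrative, not minikube's actual implementation:

```go
package main

import "fmt"

// minimumUsableMemoryMiB mirrors the 1800MB floor quoted in the
// RSRC_INSUFFICIENT_REQ_MEMORY message; an illustrative constant,
// not taken from minikube's source.
const minimumUsableMemoryMiB = 1800

// validateRequestedMemory rejects allocations below the usable minimum,
// which is why `--memory 250MB` makes both dry runs fail fast.
func validateRequestedMemory(requestedMiB int) error {
	if requestedMiB < minimumUsableMemoryMiB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, minimumUsableMemoryMiB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // error: below the minimum
	fmt.Println(validateRequestedMemory(4000)) // <nil>: the profile's configured 4000MiB passes
}
```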

TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)
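The `-f` flag exercised above takes a Go text/template that is rendered against the status struct. A minimal sketch of that rendering, assuming a stand-in `Status` type (not minikube's actual one) with just the fields the format string references:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status stands in for the struct minikube renders; only the fields
// referenced by the format string in the log appear here.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus applies a user-supplied template, as `status -f` and
// `status --format` do in the commands above.
func renderStatus(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, st); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	out, err := renderStatus(
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}",
		Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
}
```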

TestFunctional/parallel/ServiceCmdConnect (11.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-376335 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-376335 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-bvvc2" [b6969d29-62a7-474e-bcf9-71a0ad9df698] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-bvvc2" [b6969d29-62a7-474e-bcf9-71a0ad9df698] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003359452s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30361
functional_test.go:1675: http://192.168.49.2:30361: success! body:

Hostname: hello-node-connect-65d86f57f4-bvvc2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30361
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.67s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (25.1s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3587f651-e8ca-4bdd-96c1-06f01c64840d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003670381s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-376335 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-376335 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-376335 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-376335 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [996e9077-7717-44b8-917f-23f808419963] Pending
helpers_test.go:344: "sp-pod" [996e9077-7717-44b8-917f-23f808419963] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1009 19:08:16.130703  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [996e9077-7717-44b8-917f-23f808419963] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004070503s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-376335 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-376335 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-376335 delete -f testdata/storage-provisioner/pod.yaml: (1.148008222s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-376335 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2eada988-0d4a-4201-9aa6-952f1331794d] Pending
helpers_test.go:344: "sp-pod" [2eada988-0d4a-4201-9aa6-952f1331794d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003711216s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-376335 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.10s)

TestFunctional/parallel/SSHCmd (0.65s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

TestFunctional/parallel/CpCmd (2.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh -n functional-376335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cp functional-376335:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2018942784/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh -n functional-376335 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh -n functional-376335 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.29s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/303278/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo cat /etc/test/nested/copy/303278/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.93s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/303278.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo cat /etc/ssl/certs/303278.pem"
2024/10/09 19:08:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/303278.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo cat /usr/share/ca-certificates/303278.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3032782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo cat /etc/ssl/certs/3032782.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3032782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo cat /usr/share/ca-certificates/3032782.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.93s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-376335 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376335 ssh "sudo systemctl is-active docker": exit status 1 (373.617486ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376335 ssh "sudo systemctl is-active containerd": exit status 1 (276.681023ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
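`systemctl is-active` reports the unit state both on stdout and through its exit code (0 for active, non-zero — commonly 3 — otherwise), which is why the expected failures above print `inactive` alongside `ssh: Process exited with status 3`. A small interpretation helper, as a sketch rather than the suite's actual code:

```go
package main

import "fmt"

// runtimeDisabled interprets a `systemctl is-active <unit>` result: the
// runtime counts as disabled when the command exits non-zero and reports
// "inactive" on stdout, exactly what the docker and containerd checks
// above expect on a crio-only node.
func runtimeDisabled(exitCode int, stdout string) bool {
	return exitCode != 0 && stdout == "inactive"
}

func main() {
	// Mirrors the captured results: exit status 3 with "inactive".
	fmt.Println(runtimeDisabled(3, "inactive")) // the non-active runtime is disabled
	fmt.Println(runtimeDisabled(0, "active"))   // an active runtime would fail the check
}
```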

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-376335 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-376335 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-376335 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-376335 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 334465: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-376335 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-376335 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e54b4f1d-d215-4a7f-9cf8-73ff2f1b3693] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e54b4f1d-d215-4a7f-9cf8-73ff2f1b3693] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004564327s
I1009 19:08:18.052737  303278 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-376335 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
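The jsonpath query above reads `.status.loadBalancer.ingress[0].ip`, the LoadBalancer IP that `minikube tunnel` assigns to `nginx-svc`. The same extraction can be sketched in plain Go over the Service JSON — an illustration only; real client code would use the typed corev1.Service API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ingressIP extracts .status.loadBalancer.ingress[0].ip from a Service
// object, the field the kubectl jsonpath query reads once a tunnel has
// assigned a LoadBalancer ingress IP.
func ingressIP(serviceJSON []byte) (string, error) {
	var svc struct {
		Status struct {
			LoadBalancer struct {
				Ingress []struct {
					IP string `json:"ip"`
				} `json:"ingress"`
			} `json:"loadBalancer"`
		} `json:"status"`
	}
	if err := json.Unmarshal(serviceJSON, &svc); err != nil {
		return "", err
	}
	if len(svc.Status.LoadBalancer.Ingress) == 0 {
		return "", fmt.Errorf("no ingress IP assigned yet")
	}
	return svc.Status.LoadBalancer.Ingress[0].IP, nil
}

func main() {
	// 10.99.7.103 is the tunnel IP reported by AccessDirect below.
	doc := []byte(`{"status":{"loadBalancer":{"ingress":[{"ip":"10.99.7.103"}]}}}`)
	ip, err := ingressIP(doc)
	fmt.Println(ip, err) // 10.99.7.103 <nil>
}
```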

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.7.103 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-376335 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-376335 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-376335 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7tz2x" [534536bd-58b1-4527-8bb8-3b4750209baf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7tz2x" [534536bd-58b1-4527-8bb8-3b4750209baf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00359029s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "364.902315ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "59.249075ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "363.055273ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "54.550968ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (7.97s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdany-port2918886153/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728500913400879670" to /tmp/TestFunctionalparallelMountCmdany-port2918886153/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728500913400879670" to /tmp/TestFunctionalparallelMountCmdany-port2918886153/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728500913400879670" to /tmp/TestFunctionalparallelMountCmdany-port2918886153/001/test-1728500913400879670
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376335 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.579535ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 19:08:33.722715  303278 retry.go:31] will retry after 377.462334ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 19:08 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 19:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 19:08 test-1728500913400879670
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh cat /mount-9p/test-1728500913400879670
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-376335 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [95fc17cf-19c0-4db1-b56e-86fe5678472b] Pending
helpers_test.go:344: "busybox-mount" [95fc17cf-19c0-4db1-b56e-86fe5678472b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [95fc17cf-19c0-4db1-b56e-86fe5678472b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [95fc17cf-19c0-4db1-b56e-86fe5678472b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004545645s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-376335 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdany-port2918886153/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.97s)

TestFunctional/parallel/ServiceCmd/List (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 service list -o json
functional_test.go:1494: Took "516.507181ms" to run "out/minikube-linux-arm64 -p functional-376335 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31302
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31302
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/MountCmd/specific-port (1.62s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdspecific-port1642283223/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdspecific-port1642283223/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376335 ssh "sudo umount -f /mount-9p": exit status 1 (339.400096ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-376335 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdspecific-port1642283223/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3826597742/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3826597742/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3826597742/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-376335 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3826597742/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3826597742/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-376335 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3826597742/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.13s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 version -o=json --components: (1.132947386s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376335 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-376335
localhost/kicbase/echo-server:functional-376335
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376335 image ls --format short --alsologtostderr:
I1009 19:08:57.019981  339277 out.go:345] Setting OutFile to fd 1 ...
I1009 19:08:57.020175  339277 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.020185  339277 out.go:358] Setting ErrFile to fd 2...
I1009 19:08:57.020190  339277 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.020714  339277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
I1009 19:08:57.021445  339277 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.021581  339277 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.022451  339277 cli_runner.go:164] Run: docker container inspect functional-376335 --format={{.State.Status}}
I1009 19:08:57.044391  339277 ssh_runner.go:195] Run: systemctl --version
I1009 19:08:57.044452  339277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376335
I1009 19:08:57.080408  339277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/functional-376335/id_rsa Username:docker}
I1009 19:08:57.172424  339277 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376335 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/minikube-local-cache-test     | functional-376335  | 3cae3bc2b1fc5 | 3.33kB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | alpine             | 577a23b5858b9 | 52.3MB |
| docker.io/library/nginx                 | latest             | 048e090385966 | 201MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 0bcd66b03df5f | 98.3MB |
| localhost/kicbase/echo-server           | functional-376335  | ce2d2cda2d858 | 4.79MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376335 image ls --format table --alsologtostderr:
I1009 19:08:57.866016  339511 out.go:345] Setting OutFile to fd 1 ...
I1009 19:08:57.866162  339511 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.866182  339511 out.go:358] Setting ErrFile to fd 2...
I1009 19:08:57.866187  339511 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.866445  339511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
I1009 19:08:57.867295  339511 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.867488  339511 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.868095  339511 cli_runner.go:164] Run: docker container inspect functional-376335 --format={{.State.Status}}
I1009 19:08:57.886155  339511 ssh_runner.go:195] Run: systemctl --version
I1009 19:08:57.886209  339511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376335
I1009 19:08:57.917497  339511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/functional-376335/id_rsa Username:docker}
I1009 19:08:58.015182  339511 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376335 image ls --format json --alsologtostderr:
[{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52254450"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-376335"],"size":"4788229"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"200984127"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"98291250"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"3cae3bc2b1fc53345ac02205cbda94075b1017a270fe1170fcaf82d4d0c8ceac","repoDigests":["localhost/minikube-local-cache-test@sha256:b627801ac262800e1c73a110b0f5c556553db23c2709ee526b581fef3520ef97"],"repoTags":["localhost/minikube-local-cache-test:functional-376335"],"size":"3330"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376335 image ls --format json --alsologtostderr:
I1009 19:08:57.568157  339433 out.go:345] Setting OutFile to fd 1 ...
I1009 19:08:57.568313  339433 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.568322  339433 out.go:358] Setting ErrFile to fd 2...
I1009 19:08:57.568328  339433 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.568563  339433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
I1009 19:08:57.569180  339433 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.569305  339433 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.569781  339433 cli_runner.go:164] Run: docker container inspect functional-376335 --format={{.State.Status}}
I1009 19:08:57.602694  339433 ssh_runner.go:195] Run: systemctl --version
I1009 19:08:57.602762  339433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376335
I1009 19:08:57.620316  339433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/functional-376335/id_rsa Username:docker}
I1009 19:08:57.713970  339433 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376335 image ls --format yaml --alsologtostderr:
- id: 0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "98291250"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478
repoTags:
- docker.io/library/nginx:alpine
size: "52254450"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "200984127"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-376335
size: "4788229"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 3cae3bc2b1fc53345ac02205cbda94075b1017a270fe1170fcaf82d4d0c8ceac
repoDigests:
- localhost/minikube-local-cache-test@sha256:b627801ac262800e1c73a110b0f5c556553db23c2709ee526b581fef3520ef97
repoTags:
- localhost/minikube-local-cache-test:functional-376335
size: "3330"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376335 image ls --format yaml --alsologtostderr:
I1009 19:08:57.285067  339366 out.go:345] Setting OutFile to fd 1 ...
I1009 19:08:57.285260  339366 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.285289  339366 out.go:358] Setting ErrFile to fd 2...
I1009 19:08:57.285313  339366 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.285596  339366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
I1009 19:08:57.286246  339366 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.286471  339366 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.287048  339366 cli_runner.go:164] Run: docker container inspect functional-376335 --format={{.State.Status}}
I1009 19:08:57.320443  339366 ssh_runner.go:195] Run: systemctl --version
I1009 19:08:57.320524  339366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376335
I1009 19:08:57.346810  339366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/functional-376335/id_rsa Username:docker}
I1009 19:08:57.440313  339366 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-376335 ssh pgrep buildkitd: exit status 1 (355.697229ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image build -t localhost/my-image:functional-376335 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 image build -t localhost/my-image:functional-376335 testdata/build --alsologtostderr: (3.458399721s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-376335 image build -t localhost/my-image:functional-376335 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 81bc127ec59
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-376335
--> 713874f1417
Successfully tagged localhost/my-image:functional-376335
713874f14172f4722c1a4a7e8be0b8f1f325102cce7c19c99750972eccf86651
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-376335 image build -t localhost/my-image:functional-376335 testdata/build --alsologtostderr:
I1009 19:08:57.760320  339484 out.go:345] Setting OutFile to fd 1 ...
I1009 19:08:57.761158  339484 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.761187  339484 out.go:358] Setting ErrFile to fd 2...
I1009 19:08:57.761193  339484 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:08:57.761617  339484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
I1009 19:08:57.762801  339484 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.763469  339484 config.go:182] Loaded profile config "functional-376335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:08:57.764536  339484 cli_runner.go:164] Run: docker container inspect functional-376335 --format={{.State.Status}}
I1009 19:08:57.790745  339484 ssh_runner.go:195] Run: systemctl --version
I1009 19:08:57.790811  339484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-376335
I1009 19:08:57.811576  339484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/functional-376335/id_rsa Username:docker}
I1009 19:08:57.905392  339484 build_images.go:161] Building image from path: /tmp/build.989800032.tar
I1009 19:08:57.905459  339484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 19:08:57.922000  339484 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.989800032.tar
I1009 19:08:57.927980  339484 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.989800032.tar: stat -c "%s %y" /var/lib/minikube/build/build.989800032.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.989800032.tar': No such file or directory
I1009 19:08:57.928034  339484 ssh_runner.go:362] scp /tmp/build.989800032.tar --> /var/lib/minikube/build/build.989800032.tar (3072 bytes)
I1009 19:08:57.971155  339484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.989800032
I1009 19:08:57.980556  339484 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.989800032 -xf /var/lib/minikube/build/build.989800032.tar
I1009 19:08:57.990199  339484 crio.go:315] Building image: /var/lib/minikube/build/build.989800032
I1009 19:08:57.990275  339484 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-376335 /var/lib/minikube/build/build.989800032 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1009 19:09:01.109106  339484 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-376335 /var/lib/minikube/build/build.989800032 --cgroup-manager=cgroupfs: (3.11880169s)
I1009 19:09:01.109176  339484 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.989800032
I1009 19:09:01.119992  339484 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.989800032.tar
I1009 19:09:01.129340  339484 build_images.go:217] Built localhost/my-image:functional-376335 from /tmp/build.989800032.tar
I1009 19:09:01.129373  339484 build_images.go:133] succeeded building to: functional-376335
I1009 19:09:01.129379  339484 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
TestFunctional/parallel/ImageCommands/Setup (0.84s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-376335
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.84s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image load --daemon kicbase/echo-server:functional-376335 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 image load --daemon kicbase/echo-server:functional-376335 --alsologtostderr: (1.213913199s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image load --daemon kicbase/echo-server:functional-376335 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-376335
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image load --daemon kicbase/echo-server:functional-376335 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-376335 image load --daemon kicbase/echo-server:functional-376335 --alsologtostderr: (2.452522213s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.09s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image save kicbase/echo-server:functional-376335 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image rm kicbase/echo-server:functional-376335 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-376335
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 image save --daemon kicbase/echo-server:functional-376335 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-376335
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-376335 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-376335
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-376335
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-376335
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
TestMultiControlPlane/serial/StartCluster (181.48s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-179798 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 19:10:32.271908  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:59.972578  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-179798 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (3m0.669108159s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (181.48s)
TestMultiControlPlane/serial/DeployApp (8.26s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-179798 -- rollout status deployment/busybox: (5.316858221s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-kn4x5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-rzq92 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-vxknp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-kn4x5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-rzq92 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-vxknp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-kn4x5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-rzq92 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-vxknp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.26s)
TestMultiControlPlane/serial/PingHostFromPods (1.59s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-kn4x5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-kn4x5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-rzq92 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-rzq92 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-vxknp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-179798 -- exec busybox-7dff88458-vxknp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)
TestMultiControlPlane/serial/AddWorkerNode (34.29s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-179798 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-179798 -v=7 --alsologtostderr: (33.340670443s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (34.29s)
TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-179798 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.041552979s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp testdata/cp-test.txt ha-179798:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2512906403/001/cp-test_ha-179798.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798:/home/docker/cp-test.txt ha-179798-m02:/home/docker/cp-test_ha-179798_ha-179798-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m02 "sudo cat /home/docker/cp-test_ha-179798_ha-179798-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798:/home/docker/cp-test.txt ha-179798-m03:/home/docker/cp-test_ha-179798_ha-179798-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m03 "sudo cat /home/docker/cp-test_ha-179798_ha-179798-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798:/home/docker/cp-test.txt ha-179798-m04:/home/docker/cp-test_ha-179798_ha-179798-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m04 "sudo cat /home/docker/cp-test_ha-179798_ha-179798-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp testdata/cp-test.txt ha-179798-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2512906403/001/cp-test_ha-179798-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m02:/home/docker/cp-test.txt ha-179798:/home/docker/cp-test_ha-179798-m02_ha-179798.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798 "sudo cat /home/docker/cp-test_ha-179798-m02_ha-179798.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m02:/home/docker/cp-test.txt ha-179798-m03:/home/docker/cp-test_ha-179798-m02_ha-179798-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m03 "sudo cat /home/docker/cp-test_ha-179798-m02_ha-179798-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m02:/home/docker/cp-test.txt ha-179798-m04:/home/docker/cp-test_ha-179798-m02_ha-179798-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m04 "sudo cat /home/docker/cp-test_ha-179798-m02_ha-179798-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp testdata/cp-test.txt ha-179798-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2512906403/001/cp-test_ha-179798-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m03:/home/docker/cp-test.txt ha-179798:/home/docker/cp-test_ha-179798-m03_ha-179798.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798 "sudo cat /home/docker/cp-test_ha-179798-m03_ha-179798.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m03:/home/docker/cp-test.txt ha-179798-m02:/home/docker/cp-test_ha-179798-m03_ha-179798-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m02 "sudo cat /home/docker/cp-test_ha-179798-m03_ha-179798-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m03:/home/docker/cp-test.txt ha-179798-m04:/home/docker/cp-test_ha-179798-m03_ha-179798-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m04 "sudo cat /home/docker/cp-test_ha-179798-m03_ha-179798-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp testdata/cp-test.txt ha-179798-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2512906403/001/cp-test_ha-179798-m04.txt
E1009 19:13:06.540627  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:13:06.549112  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:13:06.560478  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:13:06.581852  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m04 "sudo cat /home/docker/cp-test.txt"
E1009 19:13:06.623084  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:13:06.704545  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:13:06.866018  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m04:/home/docker/cp-test.txt ha-179798:/home/docker/cp-test_ha-179798-m04_ha-179798.txt
E1009 19:13:07.187758  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798 "sudo cat /home/docker/cp-test_ha-179798-m04_ha-179798.txt"
E1009 19:13:07.829935  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m04:/home/docker/cp-test.txt ha-179798-m02:/home/docker/cp-test_ha-179798-m04_ha-179798-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m02 "sudo cat /home/docker/cp-test_ha-179798-m04_ha-179798-m02.txt"
E1009 19:13:09.113541  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 cp ha-179798-m04:/home/docker/cp-test.txt ha-179798-m03:/home/docker/cp-test_ha-179798-m04_ha-179798-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 ssh -n ha-179798-m03 "sudo cat /home/docker/cp-test_ha-179798-m04_ha-179798-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.21s)
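Each hop above follows the same round-trip pattern: `cp` a file onto a node, then `ssh` in and `cat` it back to confirm the copy landed intact. A local stand-in for that pattern (plain `cp`/`cat` substituted for the `minikube cp`/`minikube ssh` steps, and the file contents invented for the sketch):

```shell
# Stand-in for the cp-test round trip: write a source file, "copy it to the
# node", read it back, and verify the two copies are byte-identical.
src=$(mktemp)
dst=$(mktemp -d)/cp-test.txt
printf 'Test file for /cp-test\n' > "$src"

cp "$src" "$dst"            # real test: minikube -p <profile> cp <src> <node>:<dst>
readback=$(cat "$dst")      # real test: minikube -p <profile> ssh -n <node> "sudo cat <dst>"

cmp -s "$src" "$dst" && echo "contents match"
```

The real test repeats this for every (source node, destination node) pair, which is why the log shows one `cp` plus two `ssh ... cat` verifications per hop.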

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 node stop m02 -v=7 --alsologtostderr
E1009 19:13:11.676010  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:13:16.797739  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-179798 node stop m02 -v=7 --alsologtostderr: (11.9917509s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr: exit status 7 (724.949101ms)

                                                
                                                
-- stdout --
	ha-179798
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-179798-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-179798-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-179798-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:13:22.252355  355385 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:13:22.252559  355385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:13:22.252589  355385 out.go:358] Setting ErrFile to fd 2...
	I1009 19:13:22.252610  355385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:13:22.252878  355385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 19:13:22.253145  355385 out.go:352] Setting JSON to false
	I1009 19:13:22.253205  355385 mustload.go:65] Loading cluster: ha-179798
	I1009 19:13:22.253296  355385 notify.go:220] Checking for updates...
	I1009 19:13:22.253665  355385 config.go:182] Loaded profile config "ha-179798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:22.253704  355385 status.go:174] checking status of ha-179798 ...
	I1009 19:13:22.254309  355385 cli_runner.go:164] Run: docker container inspect ha-179798 --format={{.State.Status}}
	I1009 19:13:22.276674  355385 status.go:371] ha-179798 host status = "Running" (err=<nil>)
	I1009 19:13:22.276701  355385 host.go:66] Checking if "ha-179798" exists ...
	I1009 19:13:22.277023  355385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-179798
	I1009 19:13:22.306231  355385 host.go:66] Checking if "ha-179798" exists ...
	I1009 19:13:22.306644  355385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:13:22.306714  355385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-179798
	I1009 19:13:22.326401  355385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/ha-179798/id_rsa Username:docker}
	I1009 19:13:22.421465  355385 ssh_runner.go:195] Run: systemctl --version
	I1009 19:13:22.426261  355385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:13:22.438796  355385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:13:22.494747  355385 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-09 19:13:22.481479317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:13:22.495390  355385 kubeconfig.go:125] found "ha-179798" server: "https://192.168.49.254:8443"
	I1009 19:13:22.495437  355385 api_server.go:166] Checking apiserver status ...
	I1009 19:13:22.495487  355385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:22.507669  355385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1417/cgroup
	I1009 19:13:22.518626  355385 api_server.go:182] apiserver freezer: "7:freezer:/docker/d7e0c91c40c67d28d9e2ffcef4de114d95d7c2dfe0cbabbabb16d72b458e5c4e/crio/crio-1c19eeb20ee1945349b8531e117a42f85d8072e2ca96d7303fa39d34213f25a5"
	I1009 19:13:22.518703  355385 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d7e0c91c40c67d28d9e2ffcef4de114d95d7c2dfe0cbabbabb16d72b458e5c4e/crio/crio-1c19eeb20ee1945349b8531e117a42f85d8072e2ca96d7303fa39d34213f25a5/freezer.state
	I1009 19:13:22.528125  355385 api_server.go:204] freezer state: "THAWED"
	I1009 19:13:22.528157  355385 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:13:22.537602  355385 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:13:22.537632  355385 status.go:463] ha-179798 apiserver status = Running (err=<nil>)
	I1009 19:13:22.537645  355385 status.go:176] ha-179798 status: &{Name:ha-179798 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:13:22.537662  355385 status.go:174] checking status of ha-179798-m02 ...
	I1009 19:13:22.537982  355385 cli_runner.go:164] Run: docker container inspect ha-179798-m02 --format={{.State.Status}}
	I1009 19:13:22.563537  355385 status.go:371] ha-179798-m02 host status = "Stopped" (err=<nil>)
	I1009 19:13:22.563560  355385 status.go:384] host is not running, skipping remaining checks
	I1009 19:13:22.563568  355385 status.go:176] ha-179798-m02 status: &{Name:ha-179798-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:13:22.563599  355385 status.go:174] checking status of ha-179798-m03 ...
	I1009 19:13:22.564046  355385 cli_runner.go:164] Run: docker container inspect ha-179798-m03 --format={{.State.Status}}
	I1009 19:13:22.583096  355385 status.go:371] ha-179798-m03 host status = "Running" (err=<nil>)
	I1009 19:13:22.583123  355385 host.go:66] Checking if "ha-179798-m03" exists ...
	I1009 19:13:22.583505  355385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-179798-m03
	I1009 19:13:22.599981  355385 host.go:66] Checking if "ha-179798-m03" exists ...
	I1009 19:13:22.600307  355385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:13:22.600354  355385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-179798-m03
	I1009 19:13:22.618164  355385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/ha-179798-m03/id_rsa Username:docker}
	I1009 19:13:22.710143  355385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:13:22.722831  355385 kubeconfig.go:125] found "ha-179798" server: "https://192.168.49.254:8443"
	I1009 19:13:22.722867  355385 api_server.go:166] Checking apiserver status ...
	I1009 19:13:22.722908  355385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:22.734316  355385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup
	I1009 19:13:22.745902  355385 api_server.go:182] apiserver freezer: "7:freezer:/docker/d3137d91228a30a15abb5b0c4bf77075cfb0686906a081cfc86a9b953169bfd8/crio/crio-b00d652830994849229d0144c31e77f630c5f81785c15a5bb86abee87f690c87"
	I1009 19:13:22.745964  355385 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d3137d91228a30a15abb5b0c4bf77075cfb0686906a081cfc86a9b953169bfd8/crio/crio-b00d652830994849229d0144c31e77f630c5f81785c15a5bb86abee87f690c87/freezer.state
	I1009 19:13:22.755409  355385 api_server.go:204] freezer state: "THAWED"
	I1009 19:13:22.755438  355385 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:13:22.763401  355385 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:13:22.763470  355385 status.go:463] ha-179798-m03 apiserver status = Running (err=<nil>)
	I1009 19:13:22.763494  355385 status.go:176] ha-179798-m03 status: &{Name:ha-179798-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:13:22.763523  355385 status.go:174] checking status of ha-179798-m04 ...
	I1009 19:13:22.763941  355385 cli_runner.go:164] Run: docker container inspect ha-179798-m04 --format={{.State.Status}}
	I1009 19:13:22.780476  355385 status.go:371] ha-179798-m04 host status = "Running" (err=<nil>)
	I1009 19:13:22.780503  355385 host.go:66] Checking if "ha-179798-m04" exists ...
	I1009 19:13:22.780812  355385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-179798-m04
	I1009 19:13:22.798283  355385 host.go:66] Checking if "ha-179798-m04" exists ...
	I1009 19:13:22.798660  355385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:13:22.798725  355385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-179798-m04
	I1009 19:13:22.816645  355385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/ha-179798-m04/id_rsa Username:docker}
	I1009 19:13:22.909290  355385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:13:22.925448  355385 status.go:176] ha-179798-m04 status: &{Name:ha-179798-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.72s)
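The `api_server.go` probe in the stderr above locates the apiserver's freezer cgroup by grepping `/proc/<pid>/cgroup` for the `freezer` controller line, then reads `freezer.state` under that path to confirm the process is `THAWED` before hitting `/healthz`. A sketch of just the path extraction, against a shortened sample line shaped like the log output (the truncated IDs are placeholders, not real container IDs):

```shell
# Sample /proc/<pid>/cgroup freezer entry, abbreviated from the log above.
cgroup_line='7:freezer:/docker/d7e0c91c40c6.../crio/crio-1c19eeb20ee1...'

# Match the freezer controller entry (hierarchy-id:freezer:path) and keep
# everything after the second colon -- the cgroup path under
# /sys/fs/cgroup/freezer where freezer.state lives.
freezer_path=$(printf '%s\n' "$cgroup_line" | grep -E '^[0-9]+:freezer:' | cut -d: -f3-)
echo "$freezer_path"
```

The real probe then reads `/sys/fs/cgroup/freezer${freezer_path}/freezer.state` and expects `THAWED` before it checks the healthz endpoint.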

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (25.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 node start m02 -v=7 --alsologtostderr
E1009 19:13:27.039763  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-179798 node start m02 -v=7 --alsologtostderr: (23.351956942s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr
E1009 19:13:47.521323  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr: (1.587084943s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.266237769s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.27s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (205.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-179798 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-179798 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-179798 -v=7 --alsologtostderr: (37.398910831s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-179798 --wait=true -v=7 --alsologtostderr
E1009 19:14:28.483433  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:15:32.272010  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:15:50.405978  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-179798 --wait=true -v=7 --alsologtostderr: (2m47.712548804s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-179798
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (205.29s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-179798 node delete m03 -v=7 --alsologtostderr: (11.559175836s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-179798 stop -v=7 --alsologtostderr: (35.924194529s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr: exit status 7 (120.445201ms)

                                                
                                                
-- stdout --
	ha-179798
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-179798-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-179798-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:18:04.595503  369950 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:18:04.595648  369950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:18:04.595659  369950 out.go:358] Setting ErrFile to fd 2...
	I1009 19:18:04.595665  369950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:18:04.595955  369950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 19:18:04.596154  369950 out.go:352] Setting JSON to false
	I1009 19:18:04.596189  369950 mustload.go:65] Loading cluster: ha-179798
	I1009 19:18:04.596282  369950 notify.go:220] Checking for updates...
	I1009 19:18:04.596612  369950 config.go:182] Loaded profile config "ha-179798": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:18:04.596635  369950 status.go:174] checking status of ha-179798 ...
	I1009 19:18:04.597159  369950 cli_runner.go:164] Run: docker container inspect ha-179798 --format={{.State.Status}}
	I1009 19:18:04.615310  369950 status.go:371] ha-179798 host status = "Stopped" (err=<nil>)
	I1009 19:18:04.615336  369950 status.go:384] host is not running, skipping remaining checks
	I1009 19:18:04.615343  369950 status.go:176] ha-179798 status: &{Name:ha-179798 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:18:04.615376  369950 status.go:174] checking status of ha-179798-m02 ...
	I1009 19:18:04.615733  369950 cli_runner.go:164] Run: docker container inspect ha-179798-m02 --format={{.State.Status}}
	I1009 19:18:04.633753  369950 status.go:371] ha-179798-m02 host status = "Stopped" (err=<nil>)
	I1009 19:18:04.633779  369950 status.go:384] host is not running, skipping remaining checks
	I1009 19:18:04.633787  369950 status.go:176] ha-179798-m02 status: &{Name:ha-179798-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:18:04.633807  369950 status.go:174] checking status of ha-179798-m04 ...
	I1009 19:18:04.634119  369950 cli_runner.go:164] Run: docker container inspect ha-179798-m04 --format={{.State.Status}}
	I1009 19:18:04.655961  369950 status.go:371] ha-179798-m04 host status = "Stopped" (err=<nil>)
	I1009 19:18:04.655990  369950 status.go:384] host is not running, skipping remaining checks
	I1009 19:18:04.655997  369950 status.go:176] ha-179798-m04 status: &{Name:ha-179798-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.04s)

TestMultiControlPlane/serial/RestartCluster (64.87s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-179798 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 19:18:06.540862  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:18:34.248670  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-179798 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m3.920640871s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.87s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (75.04s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-179798 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-179798 --control-plane -v=7 --alsologtostderr: (1m14.042134254s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-179798 status -v=7 --alsologtostderr: (1.000321354s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.048935222s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

TestJSONOutput/start/Command (50.2s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-277710 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-277710 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (50.192712731s)
--- PASS: TestJSONOutput/start/Command (50.20s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-277710 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-277710 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-277710 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-277710 --output=json --user=testUser: (5.871381919s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-100561 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-100561 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.330107ms)

-- stdout --
	{"specversion":"1.0","id":"44d1c1d9-79f2-4eb4-a123-1a9cfa63a088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-100561] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7cfa2df-b5a9-4c20-b642-b250f008a8f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"2b509d84-14d2-4cd9-878c-f5faf77fd331","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c49a2543-1186-4688-b020-b76408a2e6e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig"}}
	{"specversion":"1.0","id":"1f3b5de4-a8f3-4637-a5d6-c942af2a0576","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube"}}
	{"specversion":"1.0","id":"d3daa343-de19-4f94-95b0-f26679c8e13a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6351911e-f8d2-4bc4-8a50-d203dec1789d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a81f18ec-6141-4c85-9651-d569e6132abf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-100561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-100561
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (37.35s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-242320 --network=
E1009 19:21:55.334098  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-242320 --network=: (35.582955907s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-242320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-242320
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-242320: (1.739243055s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.35s)

TestKicCustomNetwork/use_default_bridge_network (34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-697011 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-697011 --network=bridge: (32.047963793s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-697011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-697011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-697011: (1.927498672s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.00s)

TestKicExistingNetwork (32.19s)

=== RUN   TestKicExistingNetwork
I1009 19:22:47.711472  303278 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 19:22:47.730729  303278 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 19:22:47.730823  303278 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1009 19:22:47.730853  303278 cli_runner.go:164] Run: docker network inspect existing-network
W1009 19:22:47.745964  303278 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1009 19:22:47.746003  303278 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1009 19:22:47.746024  303278 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1009 19:22:47.746158  303278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 19:22:47.766760  303278 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e863964551dc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b8:80:df:37} reservation:<nil>}
I1009 19:22:47.767260  303278 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d37360}
I1009 19:22:47.767326  303278 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1009 19:22:47.767384  303278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1009 19:22:47.842454  303278 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-486124 --network=existing-network
E1009 19:23:06.540855  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-486124 --network=existing-network: (30.041782737s)
helpers_test.go:175: Cleaning up "existing-network-486124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-486124
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-486124: (1.979853404s)
I1009 19:23:19.879775  303278 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.19s)

TestKicCustomSubnet (31.82s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-868411 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-868411 --subnet=192.168.60.0/24: (29.752957147s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-868411 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-868411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-868411
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-868411: (2.04049002s)
--- PASS: TestKicCustomSubnet (31.82s)

TestKicStaticIP (35.7s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-620399 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-620399 --static-ip=192.168.200.200: (33.528517885s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-620399 ip
helpers_test.go:175: Cleaning up "static-ip-620399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-620399
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-620399: (2.018017229s)
--- PASS: TestKicStaticIP (35.70s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (70.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-836251 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-836251 --driver=docker  --container-runtime=crio: (30.657798641s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-838967 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-838967 --driver=docker  --container-runtime=crio: (33.761871311s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-836251
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
E1009 19:25:32.271724  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-838967
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-838967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-838967
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-838967: (1.944369309s)
helpers_test.go:175: Cleaning up "first-836251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-836251
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-836251: (2.284659228s)
--- PASS: TestMinikubeProfile (70.01s)

TestMountStart/serial/StartWithMountFirst (7.06s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-912319 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-912319 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.059900602s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.06s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-912319 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.8s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-914526 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-914526 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.801361448s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.80s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-914526 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-912319 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-912319 --alsologtostderr -v=5: (1.657326051s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-914526 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-914526
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-914526: (1.210733386s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.72s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-914526
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-914526: (7.717944992s)
--- PASS: TestMountStart/serial/RestartStopped (8.72s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-914526 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (78.5s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-838655 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-838655 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.967564268s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.50s)

TestMultiNode/serial/DeployApp2Nodes (7.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-838655 -- rollout status deployment/busybox: (5.530553166s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-gnrg7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-xhzbv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-gnrg7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-xhzbv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-gnrg7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-xhzbv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.49s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-gnrg7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-gnrg7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-xhzbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-838655 -- exec busybox-7dff88458-xhzbv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
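Editor's note: the host-IP extraction above (`nslookup ... | awk 'NR==5' | cut -d' ' -f3`) depends on a fixed line and field position in busybox nslookup output. A self-contained sketch of that pipeline, using a made-up sample (the addresses and exact layout are assumptions; real busybox output varies by build):

```shell
# Hypothetical sample of `nslookup host.minikube.internal` output from a
# busybox pod; addresses and layout are fabricated for illustration.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10

Name:      host.minikube.internal
Address 1: 192.168.67.1'

# The test's pipeline: take line 5, then the 3rd space-separated field.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

Parsing by line number is fragile across busybox versions, which is presumably why the test immediately sanity-checks the extracted value with `ping -c 1`.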

                                                
                                    
TestMultiNode/serial/AddNode (29.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-838655 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-838655 -v 3 --alsologtostderr: (29.22341792s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.91s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-838655 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (9.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp testdata/cp-test.txt multinode-838655:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile413920504/001/cp-test_multinode-838655.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655:/home/docker/cp-test.txt multinode-838655-m02:/home/docker/cp-test_multinode-838655_multinode-838655-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m02 "sudo cat /home/docker/cp-test_multinode-838655_multinode-838655-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655:/home/docker/cp-test.txt multinode-838655-m03:/home/docker/cp-test_multinode-838655_multinode-838655-m03.txt
E1009 19:28:06.540999  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m03 "sudo cat /home/docker/cp-test_multinode-838655_multinode-838655-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp testdata/cp-test.txt multinode-838655-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile413920504/001/cp-test_multinode-838655-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655-m02:/home/docker/cp-test.txt multinode-838655:/home/docker/cp-test_multinode-838655-m02_multinode-838655.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655 "sudo cat /home/docker/cp-test_multinode-838655-m02_multinode-838655.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655-m02:/home/docker/cp-test.txt multinode-838655-m03:/home/docker/cp-test_multinode-838655-m02_multinode-838655-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m03 "sudo cat /home/docker/cp-test_multinode-838655-m02_multinode-838655-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp testdata/cp-test.txt multinode-838655-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile413920504/001/cp-test_multinode-838655-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655-m03:/home/docker/cp-test.txt multinode-838655:/home/docker/cp-test_multinode-838655-m03_multinode-838655.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655 "sudo cat /home/docker/cp-test_multinode-838655-m03_multinode-838655.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 cp multinode-838655-m03:/home/docker/cp-test.txt multinode-838655-m02:/home/docker/cp-test_multinode-838655-m03_multinode-838655-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 ssh -n multinode-838655-m02 "sudo cat /home/docker/cp-test_multinode-838655-m03_multinode-838655-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.99s)
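Editor's note: CopyFile's pattern above is a round-trip check: copy a file onto a node with `minikube cp`, read it back with `minikube ssh -- sudo cat`, fan it out node-to-node, and compare contents at every hop. A minimal local stand-in for that logic (plain `cp`/`diff` replace `minikube cp` and the ssh `cat` checks so the sketch runs anywhere):

```shell
# Two temp dirs play the roles of nodes m02 and m03; cp stands in for
# `minikube cp`, diff for the `ssh -- sudo cat` content comparisons.
src=$(mktemp)
node_a=$(mktemp -d)
node_b=$(mktemp -d)
echo 'cp-test payload' > "$src"

cp "$src" "$node_a/cp-test.txt"                 # host -> first node
cp "$node_a/cp-test.txt" "$node_b/cp-test.txt"  # first node -> second node

# Every hop must hold identical content, as each `sudo cat` step asserts.
diff -q "$src" "$node_a/cp-test.txt" && \
diff -q "$src" "$node_b/cp-test.txt" && echo "round-trip OK"
```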

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-838655 node stop m03: (1.218295394s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-838655 status: exit status 7 (549.625817ms)

-- stdout --
	multinode-838655
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-838655-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-838655-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-838655 status --alsologtostderr: exit status 7 (522.124148ms)

-- stdout --
	multinode-838655
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-838655-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-838655-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1009 19:28:15.251173  423202 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:28:15.251298  423202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:28:15.251303  423202 out.go:358] Setting ErrFile to fd 2...
	I1009 19:28:15.251316  423202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:28:15.251659  423202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 19:28:15.251905  423202 out.go:352] Setting JSON to false
	I1009 19:28:15.251939  423202 mustload.go:65] Loading cluster: multinode-838655
	I1009 19:28:15.252640  423202 config.go:182] Loaded profile config "multinode-838655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:28:15.252658  423202 status.go:174] checking status of multinode-838655 ...
	I1009 19:28:15.253469  423202 cli_runner.go:164] Run: docker container inspect multinode-838655 --format={{.State.Status}}
	I1009 19:28:15.257391  423202 notify.go:220] Checking for updates...
	I1009 19:28:15.275267  423202 status.go:371] multinode-838655 host status = "Running" (err=<nil>)
	I1009 19:28:15.275304  423202 host.go:66] Checking if "multinode-838655" exists ...
	I1009 19:28:15.275621  423202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-838655
	I1009 19:28:15.312741  423202 host.go:66] Checking if "multinode-838655" exists ...
	I1009 19:28:15.313043  423202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:28:15.313151  423202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-838655
	I1009 19:28:15.331698  423202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/multinode-838655/id_rsa Username:docker}
	I1009 19:28:15.421176  423202 ssh_runner.go:195] Run: systemctl --version
	I1009 19:28:15.425633  423202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:28:15.438383  423202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:28:15.490601  423202 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-09 19:28:15.479750825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:28:15.491244  423202 kubeconfig.go:125] found "multinode-838655" server: "https://192.168.67.2:8443"
	I1009 19:28:15.491281  423202 api_server.go:166] Checking apiserver status ...
	I1009 19:28:15.491325  423202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:28:15.502199  423202 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1355/cgroup
	I1009 19:28:15.513035  423202 api_server.go:182] apiserver freezer: "7:freezer:/docker/e1fcd20ce7fe3beeabdf63c68796f1cd729af8a49ad0f86caad4c53cd20913c0/crio/crio-e5ec607bc5bb65bc854431e7b9ff42d3b9f026a1d71c1e83a47c6892cc1e99cd"
	I1009 19:28:15.513108  423202 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e1fcd20ce7fe3beeabdf63c68796f1cd729af8a49ad0f86caad4c53cd20913c0/crio/crio-e5ec607bc5bb65bc854431e7b9ff42d3b9f026a1d71c1e83a47c6892cc1e99cd/freezer.state
	I1009 19:28:15.522772  423202 api_server.go:204] freezer state: "THAWED"
	I1009 19:28:15.522803  423202 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1009 19:28:15.531266  423202 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1009 19:28:15.531296  423202 status.go:463] multinode-838655 apiserver status = Running (err=<nil>)
	I1009 19:28:15.531307  423202 status.go:176] multinode-838655 status: &{Name:multinode-838655 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:28:15.531324  423202 status.go:174] checking status of multinode-838655-m02 ...
	I1009 19:28:15.531647  423202 cli_runner.go:164] Run: docker container inspect multinode-838655-m02 --format={{.State.Status}}
	I1009 19:28:15.552824  423202 status.go:371] multinode-838655-m02 host status = "Running" (err=<nil>)
	I1009 19:28:15.552850  423202 host.go:66] Checking if "multinode-838655-m02" exists ...
	I1009 19:28:15.553164  423202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-838655-m02
	I1009 19:28:15.573891  423202 host.go:66] Checking if "multinode-838655-m02" exists ...
	I1009 19:28:15.574210  423202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:28:15.574256  423202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-838655-m02
	I1009 19:28:15.591555  423202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19780-297764/.minikube/machines/multinode-838655-m02/id_rsa Username:docker}
	I1009 19:28:15.680876  423202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:28:15.692814  423202 status.go:176] multinode-838655-m02 status: &{Name:multinode-838655-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:28:15.692854  423202 status.go:174] checking status of multinode-838655-m03 ...
	I1009 19:28:15.693185  423202 cli_runner.go:164] Run: docker container inspect multinode-838655-m03 --format={{.State.Status}}
	I1009 19:28:15.710676  423202 status.go:371] multinode-838655-m03 host status = "Stopped" (err=<nil>)
	I1009 19:28:15.710701  423202 status.go:384] host is not running, skipping remaining checks
	I1009 19:28:15.710729  423202 status.go:176] multinode-838655-m03 status: &{Name:multinode-838655-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
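Editor's note: in the stderr trace above, the apiserver probe greps `/proc/<pid>/cgroup` for the `freezer` entry, derives the cgroup path from it, and reads `/sys/fs/cgroup/freezer<path>/freezer.state`. A self-contained sketch of just the string handling, with a fabricated cgroup line (the container IDs below are placeholders, not the ones from this run):

```shell
# Fabricated /proc/<pid>/cgroup freezer entry shaped like the one in the
# trace above; "abc123" and "def456" are placeholder container IDs.
line='7:freezer:/docker/abc123/crio/crio-def456'

# Strip the "<n>:freezer:" prefix to recover the cgroup path, which the
# probe appends to /sys/fs/cgroup/freezer before reading freezer.state.
cgroup_path=${line#*:freezer:}
echo "$cgroup_path"
```

A state of `THAWED` at that path, plus a 200 from `/healthz`, is what lets the status command report `apiserver: Running`.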

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.76s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-838655 node start m03 -v=7 --alsologtostderr: (8.99699365s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.76s)

TestMultiNode/serial/RestartKeepsNodes (93.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-838655
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-838655
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-838655: (24.816091413s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-838655 --wait=true -v=8 --alsologtostderr
E1009 19:29:29.610761  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-838655 --wait=true -v=8 --alsologtostderr: (1m8.520311121s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-838655
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.51s)

TestMultiNode/serial/DeleteNode (6.15s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-838655 node delete m03: (5.461313898s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.15s)

TestMultiNode/serial/StopMultiNode (23.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-838655 stop: (23.657373072s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-838655 status: exit status 7 (90.972946ms)

-- stdout --
	multinode-838655
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-838655-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-838655 status --alsologtostderr: exit status 7 (101.311231ms)

-- stdout --
	multinode-838655
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-838655-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1009 19:30:28.942355  431024 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:30:28.942526  431024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:30:28.942536  431024 out.go:358] Setting ErrFile to fd 2...
	I1009 19:30:28.942541  431024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:30:28.942812  431024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 19:30:28.943000  431024 out.go:352] Setting JSON to false
	I1009 19:30:28.943041  431024 mustload.go:65] Loading cluster: multinode-838655
	I1009 19:30:28.943116  431024 notify.go:220] Checking for updates...
	I1009 19:30:28.944093  431024 config.go:182] Loaded profile config "multinode-838655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:30:28.944122  431024 status.go:174] checking status of multinode-838655 ...
	I1009 19:30:28.944753  431024 cli_runner.go:164] Run: docker container inspect multinode-838655 --format={{.State.Status}}
	I1009 19:30:28.962183  431024 status.go:371] multinode-838655 host status = "Stopped" (err=<nil>)
	I1009 19:30:28.962209  431024 status.go:384] host is not running, skipping remaining checks
	I1009 19:30:28.962217  431024 status.go:176] multinode-838655 status: &{Name:multinode-838655 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:30:28.962259  431024 status.go:174] checking status of multinode-838655-m02 ...
	I1009 19:30:28.962578  431024 cli_runner.go:164] Run: docker container inspect multinode-838655-m02 --format={{.State.Status}}
	I1009 19:30:28.989138  431024 status.go:371] multinode-838655-m02 host status = "Stopped" (err=<nil>)
	I1009 19:30:28.989175  431024 status.go:384] host is not running, skipping remaining checks
	I1009 19:30:28.989184  431024 status.go:176] multinode-838655-m02 status: &{Name:multinode-838655-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.85s)

TestMultiNode/serial/RestartMultiNode (51.53s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-838655 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 19:30:32.271036  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-838655 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (50.850382353s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-838655 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.53s)

TestMultiNode/serial/ValidateNameConflict (34.39s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-838655
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-838655-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-838655-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.074353ms)

-- stdout --
	* [multinode-838655-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-838655-m02' is duplicated with machine name 'multinode-838655-m02' in profile 'multinode-838655'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-838655-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-838655-m03 --driver=docker  --container-runtime=crio: (31.958012743s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-838655
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-838655: exit status 80 (324.170545ms)

-- stdout --
	* Adding node m03 to cluster multinode-838655 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-838655-m03 already exists in multinode-838655-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-838655-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-838655-m03: (1.956290245s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.39s)

TestPreload (143.44s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-935503 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1009 19:33:06.540865  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-935503 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.144891195s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-935503 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-935503 image pull gcr.io/k8s-minikube/busybox: (3.380920494s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-935503
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-935503: (5.910313142s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-935503 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-935503 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (37.422216374s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-935503 image list
helpers_test.go:175: Cleaning up "test-preload-935503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-935503
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-935503: (2.318682397s)
--- PASS: TestPreload (143.44s)

TestScheduledStopUnix (106.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-891843 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-891843 --memory=2048 --driver=docker  --container-runtime=crio: (30.131872207s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-891843 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-891843 -n scheduled-stop-891843
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-891843 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 19:34:53.088265  303278 retry.go:31] will retry after 85.523µs: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.088649  303278 retry.go:31] will retry after 80.454µs: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.088879  303278 retry.go:31] will retry after 212.961µs: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.089456  303278 retry.go:31] will retry after 221.792µs: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.092117  303278 retry.go:31] will retry after 633.221µs: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.093289  303278 retry.go:31] will retry after 660.785µs: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.094436  303278 retry.go:31] will retry after 1.046766ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.095579  303278 retry.go:31] will retry after 1.167505ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.097799  303278 retry.go:31] will retry after 2.539867ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.101038  303278 retry.go:31] will retry after 4.83583ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.106431  303278 retry.go:31] will retry after 4.329631ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.111683  303278 retry.go:31] will retry after 12.858824ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.125101  303278 retry.go:31] will retry after 18.639365ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.144337  303278 retry.go:31] will retry after 24.400002ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.169593  303278 retry.go:31] will retry after 23.525851ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
I1009 19:34:53.193856  303278 retry.go:31] will retry after 23.662143ms: open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/scheduled-stop-891843/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-891843 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-891843 -n scheduled-stop-891843
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-891843
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-891843 --schedule 15s
E1009 19:35:32.271021  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-891843
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-891843: exit status 7 (74.20892ms)

-- stdout --
	scheduled-stop-891843
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-891843 -n scheduled-stop-891843
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-891843 -n scheduled-stop-891843: exit status 7 (73.983684ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-891843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-891843
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-891843: (5.063592994s)
--- PASS: TestScheduledStopUnix (106.76s)

TestInsufficientStorage (10.55s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-699370 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-699370 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.077713725s)

-- stdout --
	{"specversion":"1.0","id":"87ea684e-ec1b-45c5-81c0-98f5d758609d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-699370] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c0bbd06-d214-4206-96b5-504153c292a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"be3e2123-1333-4935-94c4-0015a383c5c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66629199-89f9-4e85-9830-9843ac46e8e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig"}}
	{"specversion":"1.0","id":"3130f7b4-6b87-4f91-aee8-faaf34e27cb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube"}}
	{"specversion":"1.0","id":"b21afb1e-2e7b-4b2b-978b-beb38f9261a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4e4079fa-8682-473b-9ee8-dfd9dd33324c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f47bee1b-2d6a-42ec-9afb-c1b55d0e73b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cb984ff9-a18f-41e9-9cb3-0d1d50947e1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8d8d37be-93f9-48b5-9761-774cb4407337","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5db7423-7b04-4172-9a31-c211c076b867","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7c405b3c-206c-4c30-a1d6-7be359a82036","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-699370\" primary control-plane node in \"insufficient-storage-699370\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8787a1ef-9e3d-4a24-be5f-2998759e5b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1728382586-19774 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c89cab6e-43e0-46c2-985e-8e63ce26bd64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd489e7f-e7c5-46bd-ac78-3b5ae003c3be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-699370 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-699370 --output=json --layout=cluster: exit status 7 (278.491099ms)

-- stdout --
	{"Name":"insufficient-storage-699370","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-699370","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1009 19:36:17.542347  448855 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-699370" does not appear in /home/jenkins/minikube-integration/19780-297764/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-699370 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-699370 --output=json --layout=cluster: exit status 7 (285.665334ms)

-- stdout --
	{"Name":"insufficient-storage-699370","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-699370","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1009 19:36:17.826191  448918 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-699370" does not appear in /home/jenkins/minikube-integration/19780-297764/kubeconfig
	E1009 19:36:17.836610  448918 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/insufficient-storage-699370/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-699370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-699370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-699370: (1.912283259s)
--- PASS: TestInsufficientStorage (10.55s)

TestRunningBinaryUpgrade (81.98s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3174456075 start -p running-upgrade-468029 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3174456075 start -p running-upgrade-468029 --memory=2200 --vm-driver=docker  --container-runtime=crio: (47.981758576s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-468029 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-468029 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.14180486s)
helpers_test.go:175: Cleaning up "running-upgrade-468029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-468029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-468029: (2.913161882s)
--- PASS: TestRunningBinaryUpgrade (81.98s)

TestKubernetesUpgrade (138.94s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-917512 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-917512 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m15.411622487s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-917512
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-917512: (1.969810105s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-917512 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-917512 status --format={{.Host}}: exit status 7 (143.634336ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-917512 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-917512 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.528574362s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-917512 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-917512 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-917512 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (105.654409ms)

-- stdout --
	* [kubernetes-upgrade-917512] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-917512
	    minikube start -p kubernetes-upgrade-917512 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9175122 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-917512 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-917512 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-917512 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.058711877s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-917512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-917512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-917512: (2.535049251s)
--- PASS: TestKubernetesUpgrade (138.94s)

TestMissingContainerUpgrade (163.54s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4062013401 start -p missing-upgrade-781791 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4062013401 start -p missing-upgrade-781791 --memory=2200 --driver=docker  --container-runtime=crio: (1m28.594348777s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-781791
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-781791: (10.375502817s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-781791
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-781791 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1009 19:38:06.541309  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:35.335376  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-781791 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.391815655s)
helpers_test.go:175: Cleaning up "missing-upgrade-781791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-781791
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-781791: (2.257551481s)
--- PASS: TestMissingContainerUpgrade (163.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-023240 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-023240 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (89.188839ms)

-- stdout --
	* [NoKubernetes-023240] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (39.06s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-023240 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-023240 --driver=docker  --container-runtime=crio: (38.602235742s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-023240 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.06s)

TestNoKubernetes/serial/StartWithStopK8s (8.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-023240 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-023240 --no-kubernetes --driver=docker  --container-runtime=crio: (6.323638807s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-023240 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-023240 status -o json: exit status 2 (293.112247ms)
-- stdout --
	{"Name":"NoKubernetes-023240","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-023240
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-023240: (2.023744344s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.64s)

TestNoKubernetes/serial/Start (8.56s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-023240 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-023240 --no-kubernetes --driver=docker  --container-runtime=crio: (8.564615545s)
--- PASS: TestNoKubernetes/serial/Start (8.56s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-023240 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-023240 "sudo systemctl is-active --quiet service kubelet": exit status 1 (356.699336ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

TestNoKubernetes/serial/ProfileList (2.73s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (2.116448782s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.73s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-023240
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-023240: (1.278456767s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (7.46s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-023240 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-023240 --driver=docker  --container-runtime=crio: (7.458619427s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.46s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-023240 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-023240 "sudo systemctl is-active --quiet service kubelet": exit status 1 (352.364201ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

TestStoppedBinaryUpgrade/Upgrade (83.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2921068032 start -p stopped-upgrade-895307 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2921068032 start -p stopped-upgrade-895307 --memory=2200 --vm-driver=docker  --container-runtime=crio: (43.417707429s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2921068032 -p stopped-upgrade-895307 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2921068032 -p stopped-upgrade-895307 stop: (3.007083181s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-895307 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-895307 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.491849247s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.92s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-895307
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-895307: (1.343892006s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

TestPause/serial/Start (62.82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-051493 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1009 19:40:32.271529  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-051493 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m2.820523993s)
--- PASS: TestPause/serial/Start (62.82s)

TestPause/serial/SecondStartNoReconfiguration (35.57s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-051493 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-051493 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.544926582s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.57s)

TestNetworkPlugins/group/false (5.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-834655 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-834655 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (305.775327ms)
-- stdout --
	* [false-834655] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1009 19:41:51.112115  482681 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:41:51.112284  482681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:41:51.112291  482681 out.go:358] Setting ErrFile to fd 2...
	I1009 19:41:51.112296  482681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:41:51.112587  482681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-297764/.minikube/bin
	I1009 19:41:51.113031  482681 out.go:352] Setting JSON to false
	I1009 19:41:51.115719  482681 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12259,"bootTime":1728490653,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1009 19:41:51.115881  482681 start.go:139] virtualization:  
	I1009 19:41:51.118742  482681 out.go:177] * [false-834655] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 19:41:51.121184  482681 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:41:51.121229  482681 notify.go:220] Checking for updates...
	I1009 19:41:51.124261  482681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:41:51.126682  482681 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-297764/kubeconfig
	I1009 19:41:51.128801  482681 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-297764/.minikube
	I1009 19:41:51.130577  482681 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:41:51.132723  482681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:41:51.135391  482681 config.go:182] Loaded profile config "pause-051493": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:41:51.135505  482681 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:41:51.188144  482681 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:41:51.188319  482681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:41:51.298124  482681 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 19:41:51.284365254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:41:51.298234  482681 docker.go:318] overlay module found
	I1009 19:41:51.301431  482681 out.go:177] * Using the docker driver based on user configuration
	I1009 19:41:51.303231  482681 start.go:297] selected driver: docker
	I1009 19:41:51.303249  482681 start.go:901] validating driver "docker" against <nil>
	I1009 19:41:51.303263  482681 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:41:51.306025  482681 out.go:201] 
	W1009 19:41:51.307965  482681 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1009 19:41:51.309776  482681 out.go:201] 
** /stderr **
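The MK_USAGE exit here is again expected: the crio runtime needs a CNI plugin, so `--cni=false` is rejected before any cluster is created (which is why all the debug probes below report a missing profile/context). A minimal sketch of that compatibility check (illustrative names, not minikube's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// validateCNI mirrors the rule behind the error above: the "crio"
// container runtime cannot run with CNI explicitly disabled.
// This is a hypothetical helper, not minikube's real validation code.
func validateCNI(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return errors.New(`The "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	// The failing combination from the log: --container-runtime=crio --cni=false
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
	// A concrete CNI choice passes, e.g. --cni=bridge.
	fmt.Println("bridge ok:", validateCNI("crio", "bridge") == nil)
}
```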
net_test.go:88: 
----------------------- debugLogs start: false-834655 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-834655

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-834655

>>> host: /etc/nsswitch.conf:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: /etc/hosts:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: /etc/resolv.conf:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-834655

>>> host: crictl pods:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: crictl containers:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> k8s: describe netcat deployment:
error: context "false-834655" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-834655" does not exist

>>> k8s: netcat logs:
error: context "false-834655" does not exist

>>> k8s: describe coredns deployment:
error: context "false-834655" does not exist

>>> k8s: describe coredns pods:
error: context "false-834655" does not exist

>>> k8s: coredns logs:
error: context "false-834655" does not exist

>>> k8s: describe api server pod(s):
error: context "false-834655" does not exist

>>> k8s: api server logs:
error: context "false-834655" does not exist

>>> host: /etc/cni:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: ip a s:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: ip r s:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: iptables-save:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: iptables table nat:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> k8s: describe kube-proxy daemon set:
error: context "false-834655" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-834655" does not exist

>>> k8s: kube-proxy logs:
error: context "false-834655" does not exist

>>> host: kubelet daemon status:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: kubelet daemon config:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> k8s: kubelet logs:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-051493
contexts:
- context:
    cluster: pause-051493
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-051493
  name: pause-051493
current-context: pause-051493
kind: Config
preferences: {}
users:
- name: pause-051493
  user:
    client-certificate: /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/pause-051493/client.crt
    client-key: /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/pause-051493/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-834655

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: cri-dockerd version:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: containerd daemon status:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: containerd daemon config:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: /etc/containerd/config.toml:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: containerd config dump:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: crio daemon status:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: crio daemon config:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: /etc/crio:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

>>> host: crio config:
* Profile "false-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-834655"

----------------------- debugLogs end: false-834655 [took: 5.25483291s] --------------------------------
helpers_test.go:175: Cleaning up "false-834655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-834655
--- PASS: TestNetworkPlugins/group/false (5.80s)

TestPause/serial/Pause (0.96s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-051493 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.96s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-051493 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-051493 --output=json --layout=cluster: exit status 2 (405.574237ms)

-- stdout --
	{"Name":"pause-051493","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-051493","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)

TestPause/serial/Unpause (0.92s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-051493 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

TestPause/serial/PauseAgain (1.15s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-051493 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-051493 --alsologtostderr -v=5: (1.147368303s)
--- PASS: TestPause/serial/PauseAgain (1.15s)

TestPause/serial/DeletePaused (3.16s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-051493 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-051493 --alsologtostderr -v=5: (3.159892309s)
--- PASS: TestPause/serial/DeletePaused (3.16s)

TestPause/serial/VerifyDeletedResources (0.46s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-051493
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-051493: exit status 1 (22.716263ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-051493: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)

TestStartStop/group/old-k8s-version/serial/FirstStart (128.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-446521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-446521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m8.419578841s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-446521 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fce50f3b-7197-4be0-aca8-89cbcac89198] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fce50f3b-7197-4be0-aca8-89cbcac89198] Running
E1009 19:45:32.271971  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003823133s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-446521 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.65s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-446521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-446521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003835983s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-446521 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-446521 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-446521 --alsologtostderr -v=3: (11.994448219s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-446521 -n old-k8s-version-446521
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-446521 -n old-k8s-version-446521: exit status 7 (74.505698ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-446521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (136.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-446521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-446521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m16.006982235s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-446521 -n old-k8s-version-446521
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (136.58s)

TestStartStop/group/no-preload/serial/FirstStart (65.3s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-093940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-093940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m5.295532041s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.30s)

TestStartStop/group/no-preload/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-093940 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [64cb63de-4c9a-4e08-93e8-fc58f31c206f] Pending
helpers_test.go:344: "busybox" [64cb63de-4c9a-4e08-93e8-fc58f31c206f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [64cb63de-4c9a-4e08-93e8-fc58f31c206f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004466969s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-093940 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-093940 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-093940 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.212203605s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-093940 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/no-preload/serial/Stop (12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-093940 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-093940 --alsologtostderr -v=3: (12.0027902s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.00s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-qft9x" [664c6438-861e-4300-bc5c-6bfdddbf721d] Running
E1009 19:48:06.540770  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003377888s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-qft9x" [664c6438-861e-4300-bc5c-6bfdddbf721d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005268917s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-446521 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093940 -n no-preload-093940
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093940 -n no-preload-093940: exit status 7 (95.764003ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-093940 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-446521 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/no-preload/serial/SecondStart (278.29s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-093940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-093940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m37.909459543s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093940 -n no-preload-093940
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (278.29s)

TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-446521 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-446521 --alsologtostderr -v=1: (1.081766778s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-446521 -n old-k8s-version-446521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-446521 -n old-k8s-version-446521: exit status 2 (403.437535ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-446521 -n old-k8s-version-446521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-446521 -n old-k8s-version-446521: exit status 2 (343.05528ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-446521 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-446521 -n old-k8s-version-446521
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-446521 -n old-k8s-version-446521
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

TestStartStop/group/embed-certs/serial/FirstStart (57.59s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-023134 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-023134 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (57.593414466s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.59s)

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-023134 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d4bf7095-1378-4a5e-a262-8d10fdd208b5] Pending
helpers_test.go:344: "busybox" [d4bf7095-1378-4a5e-a262-8d10fdd208b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d4bf7095-1378-4a5e-a262-8d10fdd208b5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004449828s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-023134 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-023134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-023134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005748693s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-023134 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-023134 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-023134 --alsologtostderr -v=3: (11.996011654s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-023134 -n embed-certs-023134
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-023134 -n embed-certs-023134: exit status 7 (76.907041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-023134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (265.92s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-023134 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1009 19:50:24.696472  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:24.702843  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:24.714929  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:24.736341  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:24.777692  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:24.859094  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:25.020663  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:25.342942  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:25.984539  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:27.266189  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:29.828128  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:32.271924  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:34.949910  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:45.191714  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:51:05.673232  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:51:46.634893  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-023134 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m25.560407957s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-023134 -n embed-certs-023134
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (265.92s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-88pgc" [927cad6a-9de5-45d4-bc5b-67f041530d26] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004322057s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-88pgc" [927cad6a-9de5-45d4-bc5b-67f041530d26] Running
E1009 19:53:06.541577  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003976143s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-093940 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-093940 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.04s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-093940 --alsologtostderr -v=1
E1009 19:53:08.556330  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-093940 -n no-preload-093940
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-093940 -n no-preload-093940: exit status 2 (320.839809ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-093940 -n no-preload-093940
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-093940 -n no-preload-093940: exit status 2 (334.335029ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-093940 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-093940 -n no-preload-093940
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-093940 -n no-preload-093940
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.04s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-210793 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-210793 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (50.581203517s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.58s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-210793 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1fbfd382-0476-4b2a-805a-d8ed9aa7a24e] Pending
helpers_test.go:344: "busybox" [1fbfd382-0476-4b2a-805a-d8ed9aa7a24e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1fbfd382-0476-4b2a-805a-d8ed9aa7a24e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004043851s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-210793 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.37s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zbrxn" [354f4668-2d6c-485e-ad54-48a01d718850] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003989785s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-210793 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-210793 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-210793 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-210793 --alsologtostderr -v=3: (12.106673085s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zbrxn" [354f4668-2d6c-485e-ad54-48a01d718850] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00387367s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-023134 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-023134 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-023134 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-023134 -n embed-certs-023134
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-023134 -n embed-certs-023134: exit status 2 (330.909069ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-023134 -n embed-certs-023134
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-023134 -n embed-certs-023134: exit status 2 (315.362621ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-023134 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-023134 -n embed-certs-023134
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-023134 -n embed-certs-023134
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793: exit status 7 (109.188104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-210793 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-210793 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-210793 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m57.735960092s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.22s)

TestStartStop/group/newest-cni/serial/FirstStart (45.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-100719 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-100719 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (45.027554311s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.03s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-100719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1009 19:55:15.337668  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-100719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.198293278s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/newest-cni/serial/Stop (2.17s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-100719 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-100719 --alsologtostderr -v=3: (2.173879672s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.17s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-100719 -n newest-cni-100719
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-100719 -n newest-cni-100719: exit status 7 (93.439763ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-100719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (17.5s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-100719 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1009 19:55:24.695667  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:55:32.271593  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-100719 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (17.155949312s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-100719 -n newest-cni-100719
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.50s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-100719 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-100719 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-100719 -n newest-cni-100719
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-100719 -n newest-cni-100719: exit status 2 (337.465621ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-100719 -n newest-cni-100719
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-100719 -n newest-cni-100719: exit status 2 (328.850161ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-100719 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-100719 -n newest-cni-100719
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-100719 -n newest-cni-100719
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.20s)

TestNetworkPlugins/group/auto/Start (49.1s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1009 19:55:52.398204  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/old-k8s-version-446521/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (49.100878297s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.10s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-834655 "pgrep -a kubelet"
I1009 19:56:31.050858  303278 config.go:182] Loaded profile config "auto-834655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-834655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j5tfx" [d736cb61-1e91-483f-b704-3794a8c5bd59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-j5tfx" [d736cb61-1e91-483f-b704-3794a8c5bd59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004926832s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-834655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/Start (51.71s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1009 19:57:54.293664  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:54.300144  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:54.311641  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:54.333781  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:54.375530  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:54.457021  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:54.624387  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (51.714547999s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.71s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2k7dl" [ae2679aa-27d7-4dc1-8124-16bd6965c0ae] Running
E1009 19:57:54.946475  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:55.588330  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:56.869695  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:57:59.431660  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004313789s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-834655 "pgrep -a kubelet"
I1009 19:58:00.935433  303278 config.go:182] Loaded profile config "kindnet-834655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-834655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n78qz" [7823d249-9ee3-42a8-a2dd-2a3a97df93ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 19:58:04.553087  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-n78qz" [7823d249-9ee3-42a8-a2dd-2a3a97df93ee] Running
E1009 19:58:06.541312  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003969744s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-834655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (68.08s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1009 19:58:35.276113  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:59:16.237455  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.076789238s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.08s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pvbpx" [b98e3237-2e9d-4d18-ad85-38c8aff40055] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003160978s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pvbpx" [b98e3237-2e9d-4d18-ad85-38c8aff40055] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003971315s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-210793 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-210793 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-210793 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793: exit status 2 (368.015898ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793: exit status 2 (322.080757ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-210793 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-210793 -n default-k8s-diff-port-210793
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.47s)
E1009 20:03:06.541188  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/functional-376335/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:03:15.128176  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/kindnet-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:03:22.001060  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:03:35.609776  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/kindnet-834655/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-99bnb" [d963a897-6037-42dc-843e-b316050a5b2c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006074309s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (65.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.660645682s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.66s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-834655 "pgrep -a kubelet"
I1009 19:59:47.691554  303278 config.go:182] Loaded profile config "calico-834655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-834655 replace --force -f testdata/netcat-deployment.yaml
I1009 19:59:47.987442  303278 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r4qlh" [c0abafad-ec5c-4e53-89f7-dc6bac62b8ac] Pending
helpers_test.go:344: "netcat-6fc964789b-r4qlh" [c0abafad-ec5c-4e53-89f7-dc6bac62b8ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r4qlh" [c0abafad-ec5c-4e53-89f7-dc6bac62b8ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004683842s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.30s)

TestNetworkPlugins/group/calico/DNS (1.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-834655 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context calico-834655 exec deployment/netcat -- nslookup kubernetes.default: (1.205282555s)
--- PASS: TestNetworkPlugins/group/calico/DNS (1.21s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

TestNetworkPlugins/group/enable-default-cni/Start (81.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1009 20:00:32.271965  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/addons-527950/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:00:38.158856  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/no-preload-093940/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m21.391011135s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.39s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-834655 "pgrep -a kubelet"
I1009 20:00:51.941722  303278 config.go:182] Loaded profile config "custom-flannel-834655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-834655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xt9gz" [4706370a-f750-4de5-be93-138a5db67eea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xt9gz" [4706370a-f750-4de5-be93-138a5db67eea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003898838s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-834655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (53.39s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1009 20:01:31.309063  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:31.316746  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:31.328754  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:31.352559  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:31.400104  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:31.481457  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:31.642696  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:31.964678  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:32.606495  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:33.887999  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:36.450195  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:01:41.571873  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.392501388s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.39s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-834655 "pgrep -a kubelet"
I1009 20:01:49.906814  303278 config.go:182] Loaded profile config "enable-default-cni-834655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-834655 replace --force -f testdata/netcat-deployment.yaml
I1009 20:01:50.322049  303278 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tmsnx" [4ac075de-95e4-4511-90ca-ae9ae2498ec5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 20:01:51.813452  303278 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/auto-834655/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-tmsnx" [4ac075de-95e4-4511-90ca-ae9ae2498ec5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.016307099s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-834655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cjgwg" [641fa9e3-73e1-47e9-8e12-5b3314a1fff4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004646681s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (78.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-834655 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m18.897832146s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.90s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-834655 "pgrep -a kubelet"
I1009 20:02:25.944871  303278 config.go:182] Loaded profile config "flannel-834655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-834655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f77jr" [6f302198-a388-4cdd-a60d-57b6a7ecf371] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f77jr" [6f302198-a388-4cdd-a60d-57b6a7ecf371] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005535516s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

TestNetworkPlugins/group/flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-834655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-834655 "pgrep -a kubelet"
I1009 20:03:41.395381  303278 config.go:182] Loaded profile config "bridge-834655": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-834655 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jwrrl" [bfcfc5b1-5461-4668-8236-3ae685268fd6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jwrrl" [bfcfc5b1-5461-4668-8236-3ae685268fd6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00430164s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-834655 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-834655 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (29/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-506477 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-506477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-506477
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-527950 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-272548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-272548
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.13s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-834655 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-834655

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-834655

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: /etc/hosts:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: /etc/resolv.conf:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-834655

>>> host: crictl pods:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: crictl containers:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> k8s: describe netcat deployment:
error: context "kubenet-834655" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-834655" does not exist

>>> k8s: netcat logs:
error: context "kubenet-834655" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-834655" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-834655" does not exist

>>> k8s: coredns logs:
error: context "kubenet-834655" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-834655" does not exist

>>> k8s: api server logs:
error: context "kubenet-834655" does not exist

>>> host: /etc/cni:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: ip a s:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: ip r s:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: iptables-save:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: iptables table nat:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-834655" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-834655" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-834655" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:41:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-051493
contexts:
- context:
    cluster: pause-051493
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:41:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-051493
  name: pause-051493
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-051493
  user:
    client-certificate: /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/pause-051493/client.crt
    client-key: /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/pause-051493/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-834655

>>> host: docker daemon status:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: docker daemon config:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: docker system info:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: cri-docker daemon status:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: cri-docker daemon config:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: cri-dockerd version:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: containerd daemon status:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: containerd daemon config:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: containerd config dump:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: crio daemon status:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: crio daemon config:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: /etc/crio:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

>>> host: crio config:
* Profile "kubenet-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-834655"

----------------------- debugLogs end: kubenet-834655 [took: 3.861116984s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-834655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-834655
--- SKIP: TestNetworkPlugins/group/kubenet (4.13s)

TestNetworkPlugins/group/cilium (5.39s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-834655 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-834655

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-834655

>>> host: /etc/nsswitch.conf:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: /etc/hosts:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: /etc/resolv.conf:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-834655

>>> host: crictl pods:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: crictl containers:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> k8s: describe netcat deployment:
error: context "cilium-834655" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-834655" does not exist

>>> k8s: netcat logs:
error: context "cilium-834655" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-834655" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-834655" does not exist

>>> k8s: coredns logs:
error: context "cilium-834655" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-834655" does not exist

>>> k8s: api server logs:
error: context "cilium-834655" does not exist

>>> host: /etc/cni:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: ip a s:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: ip r s:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: iptables-save:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: iptables table nat:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-834655

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-834655

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-834655" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-834655" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-834655

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-834655

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-834655" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-834655" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-834655" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-834655" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-834655" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: kubelet daemon config:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> k8s: kubelet logs:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19780-297764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-051493
contexts:
- context:
    cluster: pause-051493
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:41:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-051493
  name: pause-051493
current-context: pause-051493
kind: Config
preferences: {}
users:
- name: pause-051493
  user:
    client-certificate: /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/pause-051493/client.crt
    client-key: /home/jenkins/minikube-integration/19780-297764/.minikube/profiles/pause-051493/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-834655

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: cri-dockerd version:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: containerd daemon status:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: containerd daemon config:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: containerd config dump:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: crio daemon status:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: crio daemon config:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: /etc/crio:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

>>> host: crio config:
* Profile "cilium-834655" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-834655"

----------------------- debugLogs end: cilium-834655 [took: 5.236107953s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-834655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-834655
--- SKIP: TestNetworkPlugins/group/cilium (5.39s)