Test Report: Docker_Linux_crio 19648

584241d6059a856bd6609ebe9456581adc627cea:2024-09-17:36253

Failed tests (16/327)

TestAddons/parallel/Registry (73.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.313987ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002322685s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003324306s
addons_test.go:342: (dbg) Run:  kubectl --context addons-093168 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-093168 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-093168 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080254358s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-093168 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
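
To reproduce this probe by hand against the same profile (a sketch; the pod name registry-probe is illustrative):

	# Same in-cluster check the test performs, run manually
	kubectl --context addons-093168 run --rm registry-probe --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# If it times out again, inspect the Service and its Endpoints
	kubectl --context addons-093168 -n kube-system get svc,endpoints registry

A timeout here while the registry and registry-proxy pods report Running (as above) usually points at service DNS or endpoint wiring rather than the pods themselves.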
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 ip
2024/09/17 08:50:45 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 addons disable registry --alsologtostderr -v=1
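
The DEBUG line above reaches the registry at the minikube container IP; the same port is also published to the host (5000/tcp maps to 127.0.0.1:33140 in the docker inspect output below). A sketch of both probes from the host, assuming the registry serves the standard /v2/ API root:

	# Registry via the container network IP, as in the DEBUG GET above
	curl -sSI http://192.168.49.2:5000/v2/
	# Registry via the published host port for 5000/tcp (33140 in this run)
	curl -sSI http://127.0.0.1:33140/v2/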
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-093168
helpers_test.go:235: (dbg) docker inspect addons-093168:

-- stdout --
	[
	    {
	        "Id": "f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926",
	        "Created": "2024-09-17T08:38:37.745470595Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 398166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T08:38:37.853843611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/hosts",
	        "LogPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926-json.log",
	        "Name": "/addons-093168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-093168:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-093168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe-init/diff:/var/lib/docker/overlay2/22ea169b69b771958d5aa21d4886a5f67242c32d10a387f6aa1fe4f8feab18cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-093168",
	                "Source": "/var/lib/docker/volumes/addons-093168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-093168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-093168",
	                "name.minikube.sigs.k8s.io": "addons-093168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a27331437cb7fe2f3918d4f21c6d0976e37e8d2fb43412d6ed2152b1f3b4fa1d",
	            "SandboxKey": "/var/run/docker/netns/a27331437cb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-093168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b1ff23e6ca5d5222d1d8818100c713ebb16a506c62eb4243a00007b105030e92",
	                    "EndpointID": "6cf14f071fae4cd24a1dac2c9e7c6dc188dcb38a38a4daaba6556d5caaa91067",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-093168",
	                        "f0cc99258b2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
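
When only a few fields matter, the dump above can be filtered with a Go template rather than read whole; a sketch reusing the same template minikube itself runs later in this log:

	# Container state only
	docker inspect addons-093168 --format '{{.State.Status}}'
	# Published host port for 22/tcp
	docker inspect addons-093168 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'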
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-093168 -n addons-093168
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-093168 logs -n 25: (1.252696095s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-963544              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | -o=json --download-only              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-223077              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-223077              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-223077              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                   | download-docker-146413 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | download-docker-146413               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-146413            | download-docker-146413 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                   | binary-mirror-713061   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | binary-mirror-713061                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45413               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-713061              | binary-mirror-713061   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| addons  | disable dashboard -p                 | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| start   | -p addons-093168 --wait=true         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | -p addons-093168                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | -p addons-093168                     |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-093168 ip                     | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 08:38:14.268718  397419 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:14.268997  397419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:14.269006  397419 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:14.269011  397419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:14.269250  397419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 08:38:14.269979  397419 out.go:352] Setting JSON to false
	I0917 08:38:14.270971  397419 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8443,"bootTime":1726553851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:14.271094  397419 start.go:139] virtualization: kvm guest
	I0917 08:38:14.273237  397419 out.go:177] * [addons-093168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 08:38:14.274641  397419 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 08:38:14.274672  397419 notify.go:220] Checking for updates...
	I0917 08:38:14.276997  397419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:14.277996  397419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:14.278999  397419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 08:38:14.280101  397419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 08:38:14.281266  397419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 08:38:14.282616  397419 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:14.304074  397419 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:14.304175  397419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:14.349142  397419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:14.340459492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:14.349250  397419 docker.go:318] overlay module found
	I0917 08:38:14.351082  397419 out.go:177] * Using the docker driver based on user configuration
	I0917 08:38:14.352358  397419 start.go:297] selected driver: docker
	I0917 08:38:14.352372  397419 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:14.352389  397419 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 08:38:14.353172  397419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:14.398286  397419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:14.389900591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:14.398447  397419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:14.398700  397419 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:38:14.400294  397419 out.go:177] * Using Docker driver with root privileges
	I0917 08:38:14.401571  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:14.401650  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:14.401663  397419 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 08:38:14.401757  397419 start.go:340] cluster config:
	{Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:14.402986  397419 out.go:177] * Starting "addons-093168" primary control-plane node in "addons-093168" cluster
	I0917 08:38:14.404072  397419 cache.go:121] Beginning downloading kic base image for docker with crio
	I0917 08:38:14.405262  397419 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0917 08:38:14.406317  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:14.406352  397419 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 08:38:14.406353  397419 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0917 08:38:14.406362  397419 cache.go:56] Caching tarball of preloaded images
	I0917 08:38:14.406475  397419 preload.go:172] Found /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 08:38:14.406487  397419 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 08:38:14.406819  397419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json ...
	I0917 08:38:14.406838  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json: {Name:mk614388e178da61bf05196ce91ed40880ae45f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:14.422815  397419 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0917 08:38:14.422934  397419 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0917 08:38:14.422949  397419 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0917 08:38:14.422954  397419 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0917 08:38:14.422960  397419 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0917 08:38:14.422968  397419 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0917 08:38:25.896345  397419 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0917 08:38:25.896393  397419 cache.go:194] Successfully downloaded all kic artifacts
	I0917 08:38:25.896448  397419 start.go:360] acquireMachinesLock for addons-093168: {Name:mkac87ef08cf18f2f3037d42f97e6975bc93fa09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 08:38:25.896575  397419 start.go:364] duration metric: took 100.043µs to acquireMachinesLock for "addons-093168"
	I0917 08:38:25.896610  397419 start.go:93] Provisioning new machine with config: &{Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 08:38:25.896717  397419 start.go:125] createHost starting for "" (driver="docker")
	I0917 08:38:25.898703  397419 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 08:38:25.898987  397419 start.go:159] libmachine.API.Create for "addons-093168" (driver="docker")
	I0917 08:38:25.899037  397419 client.go:168] LocalClient.Create starting
	I0917 08:38:25.899156  397419 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem
	I0917 08:38:26.182492  397419 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem
	I0917 08:38:26.297180  397419 cli_runner.go:164] Run: docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 08:38:26.312692  397419 cli_runner.go:211] docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 08:38:26.312773  397419 network_create.go:284] running [docker network inspect addons-093168] to gather additional debugging logs...
	I0917 08:38:26.312794  397419 cli_runner.go:164] Run: docker network inspect addons-093168
	W0917 08:38:26.328447  397419 cli_runner.go:211] docker network inspect addons-093168 returned with exit code 1
	I0917 08:38:26.328492  397419 network_create.go:287] error running [docker network inspect addons-093168]: docker network inspect addons-093168: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-093168 not found
	I0917 08:38:26.328507  397419 network_create.go:289] output of [docker network inspect addons-093168]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-093168 not found
	
	** /stderr **
	I0917 08:38:26.328630  397419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:26.344660  397419 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b00bc0}
	I0917 08:38:26.344706  397419 network_create.go:124] attempt to create docker network addons-093168 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 08:38:26.344757  397419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-093168 addons-093168
	I0917 08:38:26.403233  397419 network_create.go:108] docker network addons-093168 192.168.49.0/24 created
	I0917 08:38:26.403277  397419 kic.go:121] calculated static IP "192.168.49.2" for the "addons-093168" container
	I0917 08:38:26.403354  397419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 08:38:26.419565  397419 cli_runner.go:164] Run: docker volume create addons-093168 --label name.minikube.sigs.k8s.io=addons-093168 --label created_by.minikube.sigs.k8s.io=true
	I0917 08:38:26.436382  397419 oci.go:103] Successfully created a docker volume addons-093168
	I0917 08:38:26.436456  397419 cli_runner.go:164] Run: docker run --rm --name addons-093168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --entrypoint /usr/bin/test -v addons-093168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0917 08:38:33.360703  397419 cli_runner.go:217] Completed: docker run --rm --name addons-093168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --entrypoint /usr/bin/test -v addons-093168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (6.924191678s)
	I0917 08:38:33.360734  397419 oci.go:107] Successfully prepared a docker volume addons-093168
	I0917 08:38:33.360748  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:33.360770  397419 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 08:38:33.360820  397419 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-093168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 08:38:37.679996  397419 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-093168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.31913353s)
	I0917 08:38:37.680031  397419 kic.go:203] duration metric: took 4.319258144s to extract preloaded images to volume ...
	W0917 08:38:37.680167  397419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 08:38:37.680264  397419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 08:38:37.730224  397419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-093168 --name addons-093168 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-093168 --network addons-093168 --ip 192.168.49.2 --volume addons-093168:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0917 08:38:38.015246  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Running}}
	I0917 08:38:38.033247  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.053229  397419 cli_runner.go:164] Run: docker exec addons-093168 stat /var/lib/dpkg/alternatives/iptables
	I0917 08:38:38.096763  397419 oci.go:144] the created container "addons-093168" has a running status.
	I0917 08:38:38.096799  397419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa...
	I0917 08:38:38.316707  397419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 08:38:38.338702  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.370614  397419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 08:38:38.370640  397419 kic_runner.go:114] Args: [docker exec --privileged addons-093168 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 08:38:38.443014  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.468083  397419 machine.go:93] provisionDockerMachine start ...
	I0917 08:38:38.468181  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.487785  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.488024  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.488039  397419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 08:38:38.683369  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093168
	
	I0917 08:38:38.683409  397419 ubuntu.go:169] provisioning hostname "addons-093168"
	I0917 08:38:38.683487  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.701314  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.701561  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.701586  397419 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-093168 && echo "addons-093168" | sudo tee /etc/hostname
	I0917 08:38:38.842294  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093168
	
	I0917 08:38:38.842367  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.858454  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.858651  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.858675  397419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-093168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-093168/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-093168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 08:38:38.987912  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 08:38:38.987964  397419 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19648-389277/.minikube CaCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19648-389277/.minikube}
	I0917 08:38:38.988009  397419 ubuntu.go:177] setting up certificates
	I0917 08:38:38.988022  397419 provision.go:84] configureAuth start
	I0917 08:38:38.988088  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.005336  397419 provision.go:143] copyHostCerts
	I0917 08:38:39.005415  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/key.pem (1679 bytes)
	I0917 08:38:39.005548  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/ca.pem (1082 bytes)
	I0917 08:38:39.005641  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/cert.pem (1123 bytes)
	I0917 08:38:39.005712  397419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem org=jenkins.addons-093168 san=[127.0.0.1 192.168.49.2 addons-093168 localhost minikube]
	I0917 08:38:39.090312  397419 provision.go:177] copyRemoteCerts
	I0917 08:38:39.090393  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 08:38:39.090456  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.106972  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.200856  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 08:38:39.222438  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 08:38:39.243612  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 08:38:39.265193  397419 provision.go:87] duration metric: took 277.150434ms to configureAuth
	I0917 08:38:39.265224  397419 ubuntu.go:193] setting minikube options for container-runtime
	I0917 08:38:39.265409  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:39.265521  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.282135  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:39.282384  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:39.282416  397419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 08:38:39.504192  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 08:38:39.504224  397419 machine.go:96] duration metric: took 1.036114607s to provisionDockerMachine
	I0917 08:38:39.504238  397419 client.go:171] duration metric: took 13.605190317s to LocalClient.Create
	I0917 08:38:39.504260  397419 start.go:167] duration metric: took 13.605271001s to libmachine.API.Create "addons-093168"
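The sysconfig drop-in written above passes --insecure-registry 10.96.0.0/12 (the cluster's service CIDR) to CRI-O, so pulls from registries on in-cluster service IPs, such as the registry addon, don't require TLS. A quick sanity check that the drop-in landed as intended (illustrative command, not captured in this log):

	sudo cat /etc/sysconfig/crio.minikube
	# expected contents, per the tee above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '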
	I0917 08:38:39.504270  397419 start.go:293] postStartSetup for "addons-093168" (driver="docker")
	I0917 08:38:39.504289  397419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 08:38:39.504344  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 08:38:39.504394  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.522028  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.616778  397419 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 08:38:39.619852  397419 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 08:38:39.619881  397419 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 08:38:39.619889  397419 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 08:38:39.619897  397419 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0917 08:38:39.619908  397419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-389277/.minikube/addons for local assets ...
	I0917 08:38:39.619990  397419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-389277/.minikube/files for local assets ...
	I0917 08:38:39.620018  397419 start.go:296] duration metric: took 115.734968ms for postStartSetup
	I0917 08:38:39.620325  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.637039  397419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json ...
	I0917 08:38:39.637313  397419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 08:38:39.637369  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.653547  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.748768  397419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 08:38:39.752898  397419 start.go:128] duration metric: took 13.856163014s to createHost
	I0917 08:38:39.752925  397419 start.go:83] releasing machines lock for "addons-093168", held for 13.856335009s
	I0917 08:38:39.752987  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.769324  397419 ssh_runner.go:195] Run: cat /version.json
	I0917 08:38:39.769390  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.769443  397419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 08:38:39.769521  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.786951  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.787867  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.941853  397419 ssh_runner.go:195] Run: systemctl --version
	I0917 08:38:39.946158  397419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 08:38:40.084473  397419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 08:38:40.088727  397419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:40.106449  397419 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 08:38:40.106528  397419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:40.132230  397419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
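The two find invocations above disable the image's stock CNI configs (loopback, podman bridge, crio bridge) by renaming them with a .mk_disabled suffix rather than deleting them, leaving /etc/cni/net.d clear for kindnet. A minimal sketch of the reverse operation, restoring the renamed files by hand (assumes POSIX sh; not part of the captured run):

	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;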
	I0917 08:38:40.132261  397419 start.go:495] detecting cgroup driver to use...
	I0917 08:38:40.132294  397419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 08:38:40.132351  397419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 08:38:40.146387  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 08:38:40.156232  397419 docker.go:217] disabling cri-docker service (if available) ...
	I0917 08:38:40.156282  397419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 08:38:40.168347  397419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 08:38:40.181162  397419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 08:38:40.257135  397419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 08:38:40.333605  397419 docker.go:233] disabling docker service ...
	I0917 08:38:40.333673  397419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 08:38:40.351601  397419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 08:38:40.362162  397419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 08:38:40.440587  397419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 08:38:40.525972  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 08:38:40.536529  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 08:38:40.551093  397419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 08:38:40.551153  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.559832  397419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 08:38:40.559898  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.568567  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.577380  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.585958  397419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 08:38:40.594312  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.603119  397419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.617231  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.626110  397419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 08:38:40.634005  397419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
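Taken together, the sed and grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not read back from the host; section placement follows CRI-O's standard TOML layout):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"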
	I0917 08:38:40.641779  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:40.712061  397419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 08:38:40.806565  397419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 08:38:40.806642  397419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 08:38:40.809970  397419 start.go:563] Will wait 60s for crictl version
	I0917 08:38:40.810032  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:38:40.812917  397419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 08:38:40.845887  397419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 08:38:40.845982  397419 ssh_runner.go:195] Run: crio --version
	I0917 08:38:40.880638  397419 ssh_runner.go:195] Run: crio --version
	I0917 08:38:40.915800  397419 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0917 08:38:40.917229  397419 cli_runner.go:164] Run: docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:40.933605  397419 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 08:38:40.937163  397419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
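The /etc/hosts rewrite above uses a filter-then-append idiom to stay idempotent: drop any existing host.minikube.internal line, append the fresh mapping, then copy the temp file into place (inside a container /etc/hosts is typically a bind mount, so it is overwritten with cp rather than renamed). A generalized sketch of the same idiom (hypothetical helper name, assumes bash):

	update_hosts_entry() {  # usage: update_hosts_entry 192.168.49.1 host.minikube.internal
	  local ip="$1" name="$2"
	  # keep every line except the old mapping, then append the new one
	  { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	}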
	I0917 08:38:40.947226  397419 kubeadm.go:883] updating cluster {Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 08:38:40.947379  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:40.947455  397419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 08:38:41.008460  397419 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 08:38:41.008482  397419 crio.go:433] Images already preloaded, skipping extraction
	I0917 08:38:41.008524  397419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 08:38:41.040345  397419 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 08:38:41.040370  397419 cache_images.go:84] Images are preloaded, skipping loading
	I0917 08:38:41.040378  397419 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0917 08:38:41.040480  397419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-093168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 08:38:41.040565  397419 ssh_runner.go:195] Run: crio config
	I0917 08:38:41.080761  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:41.080783  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:41.080795  397419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 08:38:41.080819  397419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-093168 NodeName:addons-093168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 08:38:41.080967  397419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-093168"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
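	A config like the one rendered above can be exercised before committing the node to it: kubeadm init accepts --dry-run, which prints what would be done without changing anything (illustrative invocation; the paths are the ones from this run):
	
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run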
	
	I0917 08:38:41.081023  397419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 08:38:41.089456  397419 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 08:38:41.089531  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 08:38:41.097438  397419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 08:38:41.113372  397419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 08:38:41.129326  397419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0917 08:38:41.144885  397419 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 08:38:41.147998  397419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 08:38:41.157624  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:41.237475  397419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:41.249661  397419 certs.go:68] Setting up /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168 for IP: 192.168.49.2
	I0917 08:38:41.249683  397419 certs.go:194] generating shared ca certs ...
	I0917 08:38:41.249699  397419 certs.go:226] acquiring lock for ca certs: {Name:mk8da29d5216ae8373400245c621790543881904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.249825  397419 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key
	I0917 08:38:41.614404  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt ...
	I0917 08:38:41.614440  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt: {Name:mkd45d6a60b00dd159e65c0f1b6c2e5a8afabc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.614666  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key ...
	I0917 08:38:41.614685  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key: {Name:mk5291de481583f940222c6612a96e62ccd87eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.614788  397419 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key
	I0917 08:38:41.754351  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt ...
	I0917 08:38:41.754383  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt: {Name:mk27ce36d6db90e160bdb0276068ed953effdbf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.754586  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key ...
	I0917 08:38:41.754606  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key: {Name:mk3afa86519521f4fca302906407d013abfb0d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.754709  397419 certs.go:256] generating profile certs ...
	I0917 08:38:41.754798  397419 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key
	I0917 08:38:41.754829  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt with IP's: []
	I0917 08:38:42.064154  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt ...
	I0917 08:38:42.064185  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: {Name:mk5cb5afe904908b0cba1bf17d824eee5c984153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.064362  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key ...
	I0917 08:38:42.064377  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key: {Name:mkf2e14b11acd2448049e231dd4ead7716664bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.064476  397419 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d
	I0917 08:38:42.064507  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 08:38:42.261028  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d ...
	I0917 08:38:42.261067  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d: {Name:mk077ce39ea3bb757e6d6ad979b544d7da0b437c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.261244  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d ...
	I0917 08:38:42.261257  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d: {Name:mk33433d67eea38775352092fed9c6a72038761a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.261329  397419 certs.go:381] copying /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d -> /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt
	I0917 08:38:42.261432  397419 certs.go:385] copying /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d -> /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key
	I0917 08:38:42.261485  397419 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key
	I0917 08:38:42.261504  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt with IP's: []
	I0917 08:38:42.508375  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt ...
	I0917 08:38:42.508413  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt: {Name:mk89431354833730cad316e358f6ad32f98671ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.508622  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key ...
	I0917 08:38:42.508638  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key: {Name:mk49266541348c002ddfe954fcac3e31b23d5e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.508851  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 08:38:42.508900  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem (1082 bytes)
	I0917 08:38:42.508938  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem (1123 bytes)
	I0917 08:38:42.508966  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem (1679 bytes)
	I0917 08:38:42.509614  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 08:38:42.532076  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 08:38:42.553868  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 08:38:42.575679  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 08:38:42.597095  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 08:38:42.618358  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 08:38:42.639563  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 08:38:42.660637  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 08:38:42.681627  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 08:38:42.702968  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 08:38:42.718889  397419 ssh_runner.go:195] Run: openssl version
	I0917 08:38:42.724037  397419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 08:38:42.732397  397419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.735486  397419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.735536  397419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.741586  397419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
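The b5213941.0 link name follows OpenSSL's subject-hash convention: tools resolve a CA in /etc/ssl/certs by hashing the certificate's subject and appending a collision index. The openssl x509 -hash -noout run above computes exactly that hash; deriving the link name explicitly would look like this (illustrative, not captured in the log):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"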
	I0917 08:38:42.749881  397419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 08:38:42.752874  397419 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 08:38:42.752930  397419 kubeadm.go:392] StartCluster: {Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:42.753025  397419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 08:38:42.753085  397419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 08:38:42.786903  397419 cri.go:89] found id: ""
	I0917 08:38:42.786985  397419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 08:38:42.796179  397419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 08:38:42.804749  397419 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 08:38:42.804799  397419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 08:38:42.812984  397419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 08:38:42.813000  397419 kubeadm.go:157] found existing configuration files:
	
	I0917 08:38:42.813037  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 08:38:42.820866  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 08:38:42.820930  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 08:38:42.828240  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 08:38:42.835643  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 08:38:42.835737  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 08:38:42.843259  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 08:38:42.851080  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 08:38:42.851131  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 08:38:42.858437  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 08:38:42.866098  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 08:38:42.866156  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 08:38:42.873252  397419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 08:38:42.908386  397419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 08:38:42.908464  397419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 08:38:42.923732  397419 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 08:38:42.923800  397419 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0917 08:38:42.923834  397419 kubeadm.go:310] OS: Linux
	I0917 08:38:42.923879  397419 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 08:38:42.923964  397419 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 08:38:42.924025  397419 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 08:38:42.924093  397419 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 08:38:42.924167  397419 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 08:38:42.924236  397419 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 08:38:42.924302  397419 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 08:38:42.924375  397419 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 08:38:42.924442  397419 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 08:38:42.973444  397419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 08:38:42.973610  397419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 08:38:42.973749  397419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 08:38:42.979391  397419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 08:38:42.982351  397419 out.go:235]   - Generating certificates and keys ...
	I0917 08:38:42.982445  397419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 08:38:42.982558  397419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 08:38:43.304222  397419 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 08:38:43.356991  397419 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 08:38:43.472470  397419 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 08:38:43.631625  397419 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 08:38:43.778369  397419 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 08:38:43.778571  397419 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-093168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:44.236292  397419 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 08:38:44.236448  397419 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-093168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:44.386759  397419 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 08:38:44.547662  397419 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 08:38:45.256381  397419 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 08:38:45.256470  397419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 08:38:45.352447  397419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 08:38:45.496534  397419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 08:38:45.783093  397419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 08:38:45.948400  397419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 08:38:46.126268  397419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 08:38:46.126739  397419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 08:38:46.129290  397419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 08:38:46.131498  397419 out.go:235]   - Booting up control plane ...
	I0917 08:38:46.131624  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 08:38:46.131735  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 08:38:46.131825  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 08:38:46.139890  397419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 08:38:46.145973  397419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 08:38:46.146041  397419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 08:38:46.229694  397419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 08:38:46.229838  397419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 08:38:46.732374  397419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.404175ms
	I0917 08:38:46.732502  397419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 08:38:51.232483  397419 kubeadm.go:310] [api-check] The API server is healthy after 4.501470708s
	I0917 08:38:51.243357  397419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 08:38:51.254150  397419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 08:38:51.272346  397419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 08:38:51.272569  397419 kubeadm.go:310] [mark-control-plane] Marking the node addons-093168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 08:38:51.279966  397419 kubeadm.go:310] [bootstrap-token] Using token: k80no8.z164l1wfcaclt3ve
	I0917 08:38:51.281525  397419 out.go:235]   - Configuring RBAC rules ...
	I0917 08:38:51.281680  397419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 08:38:51.284683  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 08:38:51.290003  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 08:38:51.293675  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 08:38:51.296125  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 08:38:51.298653  397419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 08:38:51.638681  397419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 08:38:52.057839  397419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 08:38:52.638211  397419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 08:38:52.639067  397419 kubeadm.go:310] 
	I0917 08:38:52.639151  397419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 08:38:52.639161  397419 kubeadm.go:310] 
	I0917 08:38:52.639256  397419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 08:38:52.639296  397419 kubeadm.go:310] 
	I0917 08:38:52.639346  397419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 08:38:52.639417  397419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 08:38:52.639470  397419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 08:38:52.639478  397419 kubeadm.go:310] 
	I0917 08:38:52.639522  397419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 08:38:52.639529  397419 kubeadm.go:310] 
	I0917 08:38:52.639568  397419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 08:38:52.639593  397419 kubeadm.go:310] 
	I0917 08:38:52.639638  397419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 08:38:52.639707  397419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 08:38:52.639770  397419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 08:38:52.639776  397419 kubeadm.go:310] 
	I0917 08:38:52.639844  397419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 08:38:52.639938  397419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 08:38:52.639972  397419 kubeadm.go:310] 
	I0917 08:38:52.640081  397419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k80no8.z164l1wfcaclt3ve \
	I0917 08:38:52.640203  397419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df9ded58c525a6d55df91cd644932b8a694d03f6beda3e691beb74ea1851cf09 \
	I0917 08:38:52.640238  397419 kubeadm.go:310] 	--control-plane 
	I0917 08:38:52.640248  397419 kubeadm.go:310] 
	I0917 08:38:52.640345  397419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 08:38:52.640356  397419 kubeadm.go:310] 
	I0917 08:38:52.640453  397419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k80no8.z164l1wfcaclt3ve \
	I0917 08:38:52.640571  397419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df9ded58c525a6d55df91cd644932b8a694d03f6beda3e691beb74ea1851cf09 
	I0917 08:38:52.642642  397419 kubeadm.go:310] W0917 08:38:42.905770    1305 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:52.643061  397419 kubeadm.go:310] W0917 08:38:42.906409    1305 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:52.643311  397419 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0917 08:38:52.643438  397419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 08:38:52.643454  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:52.643464  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:52.645324  397419 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 08:38:52.646624  397419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 08:38:52.650315  397419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 08:38:52.650335  397419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 08:38:52.667218  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 08:38:52.889823  397419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 08:38:52.889885  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:52.889918  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-093168 minikube.k8s.io/updated_at=2024_09_17T08_38_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=addons-093168 minikube.k8s.io/primary=true
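The clusterrolebinding created above (minikube-rbac) grants the cluster-admin role to the default service account in kube-system, which minikube's addons rely on for full API access. Verifying the binding afterwards (illustrative):

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac -o wide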
	I0917 08:38:52.897123  397419 ops.go:34] apiserver oom_adj: -16
	I0917 08:38:53.039509  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:53.539727  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.039909  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.539969  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.040209  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.540163  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.039997  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.540545  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.039787  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.104143  397419 kubeadm.go:1113] duration metric: took 4.214320429s to wait for elevateKubeSystemPrivileges
	I0917 08:38:57.104195  397419 kubeadm.go:394] duration metric: took 14.351272056s to StartCluster
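The burst of repeated "kubectl get sa default" calls above is a poll loop: startup blocks until the default ServiceAccount exists, since workloads cannot be admitted into the namespace before it does. A standalone equivalent of the same wait (sketch, using the binary and kubeconfig paths from this run):

	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done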
	I0917 08:38:57.104218  397419 settings.go:142] acquiring lock: {Name:mk95cfba95882d4e40150b5e054772c8fe045040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:57.104356  397419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:57.104769  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/kubeconfig: {Name:mk341f12644f68f3679935ee94cc49d156e11570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:57.105015  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 08:38:57.105016  397419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 08:38:57.105108  397419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 08:38:57.105239  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:57.105256  397419 addons.go:69] Setting cloud-spanner=true in profile "addons-093168"
	I0917 08:38:57.105271  397419 addons.go:69] Setting gcp-auth=true in profile "addons-093168"
	I0917 08:38:57.105277  397419 addons.go:234] Setting addon cloud-spanner=true in "addons-093168"
	I0917 08:38:57.105276  397419 addons.go:69] Setting storage-provisioner=true in profile "addons-093168"
	I0917 08:38:57.105278  397419 addons.go:69] Setting volumesnapshots=true in profile "addons-093168"
	I0917 08:38:57.105298  397419 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-093168"
	I0917 08:38:57.105238  397419 addons.go:69] Setting yakd=true in profile "addons-093168"
	I0917 08:38:57.105296  397419 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-093168"
	I0917 08:38:57.105312  397419 addons.go:69] Setting registry=true in profile "addons-093168"
	I0917 08:38:57.105312  397419 addons.go:234] Setting addon volumesnapshots=true in "addons-093168"
	I0917 08:38:57.105317  397419 addons.go:234] Setting addon yakd=true in "addons-093168"
	I0917 08:38:57.105321  397419 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-093168"
	I0917 08:38:57.105323  397419 addons.go:69] Setting helm-tiller=true in profile "addons-093168"
	I0917 08:38:57.105332  397419 addons.go:69] Setting metrics-server=true in profile "addons-093168"
	I0917 08:38:57.105335  397419 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-093168"
	I0917 08:38:57.105259  397419 addons.go:69] Setting volcano=true in profile "addons-093168"
	I0917 08:38:57.105344  397419 addons.go:234] Setting addon metrics-server=true in "addons-093168"
	I0917 08:38:57.105245  397419 addons.go:69] Setting inspektor-gadget=true in profile "addons-093168"
	I0917 08:38:57.105347  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105351  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105324  397419 addons.go:234] Setting addon registry=true in "addons-093168"
	I0917 08:38:57.105357  397419 addons.go:234] Setting addon inspektor-gadget=true in "addons-093168"
	I0917 08:38:57.105362  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105353  397419 addons.go:234] Setting addon volcano=true in "addons-093168"
	I0917 08:38:57.105486  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105291  397419 mustload.go:65] Loading cluster: addons-093168
	I0917 08:38:57.105336  397419 addons.go:234] Setting addon helm-tiller=true in "addons-093168"
	I0917 08:38:57.105608  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105707  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:57.105371  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105931  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105935  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105960  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105377  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.106050  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106193  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105960  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106458  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106627  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105313  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105345  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.107248  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105376  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105250  397419 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-093168"
	I0917 08:38:57.108052  397419 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-093168"
	I0917 08:38:57.108362  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105380  397419 addons.go:69] Setting default-storageclass=true in profile "addons-093168"
	I0917 08:38:57.105302  397419 addons.go:234] Setting addon storage-provisioner=true in "addons-093168"
	I0917 08:38:57.105388  397419 addons.go:69] Setting ingress-dns=true in profile "addons-093168"
	I0917 08:38:57.105386  397419 addons.go:69] Setting ingress=true in profile "addons-093168"
	I0917 08:38:57.108644  397419 addons.go:234] Setting addon ingress=true in "addons-093168"
	I0917 08:38:57.108680  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108700  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108747  397419 addons.go:234] Setting addon ingress-dns=true in "addons-093168"
	I0917 08:38:57.108788  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108821  397419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-093168"
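
The interleaved "Setting addon ..." / "Checking if ... exists" lines above overlap in timestamp and ordering because each addon is enabled on its own goroutine. A minimal sketch of that fan-out pattern, assuming a per-addon enable callback (illustrative only; enableAll and enable are hypothetical names, not minikube's actual addons.go API):

	package sketch

	import "golang.org/x/sync/errgroup"

	// enableAll fans one enable callback per addon out to its own
	// goroutine and returns the first error once all of them finish.
	func enableAll(addons []string, enable func(name string) error) error {
		var g errgroup.Group
		for _, name := range addons {
			name := name // capture the per-iteration value (needed before Go 1.22)
			g.Go(func() error { return enable(name) })
		}
		return g.Wait()
	}
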
	I0917 08:38:57.112690  397419 out.go:177] * Verifying Kubernetes components...
	I0917 08:38:57.114189  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:57.124402  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.124402  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.124587  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125036  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125084  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125993  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.143502  397419 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 08:38:57.144872  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 08:38:57.144901  397419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 08:38:57.144980  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	W0917 08:38:57.150681  397419 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0917 08:38:57.153691  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.155722  397419 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 08:38:57.159231  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 08:38:57.159256  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 08:38:57.159314  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.172289  397419 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 08:38:57.176642  397419 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:57.176666  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 08:38:57.176733  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.193988  397419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 08:38:57.196004  397419 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 08:38:57.197115  397419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:57.197136  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 08:38:57.197200  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.202125  397419 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 08:38:57.203455  397419 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 08:38:57.203530  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 08:38:57.203679  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.204660  397419 addons.go:234] Setting addon default-storageclass=true in "addons-093168"
	I0917 08:38:57.204707  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.204824  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 08:38:57.205196  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.207284  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:57.207449  397419 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 08:38:57.208612  397419 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 08:38:57.208633  397419 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 08:38:57.208701  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.208883  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:57.210517  397419 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:57.210538  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 08:38:57.210595  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.210853  397419 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 08:38:57.212148  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 08:38:57.212167  397419 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 08:38:57.212221  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.216414  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.219236  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 08:38:57.221033  397419 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-093168"
	I0917 08:38:57.221085  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.221137  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 08:38:57.221157  397419 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 08:38:57.221227  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.221586  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.221963  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 08:38:57.223885  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 08:38:57.225253  397419 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 08:38:57.226499  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 08:38:57.226722  397419 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:57.226737  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 08:38:57.226802  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.229771  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 08:38:57.229842  397419 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 08:38:57.231204  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 08:38:57.231925  397419 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:57.231954  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 08:38:57.232015  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.240168  397419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:57.240188  397419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 08:38:57.240249  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.251934  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.253019  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.256107  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.256961  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 08:38:57.270556  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 08:38:57.272877  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 08:38:57.274130  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 08:38:57.274138  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.274160  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 08:38:57.274232  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.286114  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.286432  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.286552  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.287928  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.292989  397419 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 08:38:57.293246  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.295525  397419 out.go:177]   - Using image docker.io/busybox:stable
	I0917 08:38:57.295767  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.297062  397419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:57.297077  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 08:38:57.297117  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.299372  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.306226  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.314733  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	W0917 08:38:57.337065  397419 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 08:38:57.337105  397419 retry.go:31] will retry after 135.437372ms: ssh: handshake failed: EOF
	I0917 08:38:57.346335  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 08:38:57.356789  397419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:57.538116  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 08:38:57.538148  397419 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 08:38:57.541546  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:57.642930  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:38:57.642961  397419 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 08:38:57.652875  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 08:38:57.652902  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 08:38:57.744251  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:57.752468  397419 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 08:38:57.752499  397419 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 08:38:57.753674  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 08:38:57.753698  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 08:38:57.833558  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 08:38:57.833662  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 08:38:57.834064  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:57.835232  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:57.842341  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 08:38:57.842375  397419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 08:38:57.849540  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:38:57.853917  397419 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 08:38:57.853947  397419 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 08:38:57.936443  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:57.936758  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 08:38:57.936784  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 08:38:57.938952  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:57.941233  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 08:38:57.941258  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 08:38:58.033712  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:58.034229  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 08:38:58.034295  397419 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 08:38:58.046437  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 08:38:58.046529  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 08:38:58.047136  397419 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 08:38:58.047196  397419 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 08:38:58.133693  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 08:38:58.133782  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 08:38:58.139956  397419 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:58.139985  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 08:38:58.233802  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 08:38:58.233848  397419 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 08:38:58.252638  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:38:58.252687  397419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 08:38:58.254386  397419 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 08:38:58.254464  397419 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 08:38:58.333784  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 08:38:58.333878  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 08:38:58.449224  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 08:38:58.449259  397419 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 08:38:58.449658  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:58.548889  397419 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:38:58.548923  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 08:38:58.633498  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 08:38:58.633532  397419 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 08:38:58.633842  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 08:38:58.633864  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 08:38:58.634541  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:38:58.750791  397419 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 08:38:58.750827  397419 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 08:38:58.936229  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:38:59.233524  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:38:59.233625  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 08:38:59.333560  397419 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 08:38:59.333595  397419 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 08:38:59.653548  397419 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 08:38:59.653582  397419 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 08:38:59.654019  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 08:38:59.654039  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 08:38:59.750974  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:38:59.844245  397419 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.497868768s)
	I0917 08:38:59.844279  397419 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
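
The sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 here). A hedged re-expression of the hosts-insertion half of that edit in Go (the sed command in the log is authoritative; injectHostRecord is a hypothetical helper, and the "log" insertion before the errors line is omitted):

	package main

	import (
		"fmt"
		"regexp"
	)

	// injectHostRecord inserts a hosts{} block immediately before the
	// "forward . /etc/resolv.conf" line of a Corefile, mirroring the
	// sed expression in the log line above.
	func injectHostRecord(corefile, hostIP string) string {
		block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		re := regexp.MustCompile(`(?m)^(\s*forward \. /etc/resolv\.conf.*)$`)
		return re.ReplaceAllString(corefile, block+"$1")
	}

	func main() {
		fmt.Print(injectHostRecord("        errors\n        forward . /etc/resolv.conf\n", "192.168.49.2"))
	}
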
	I0917 08:38:59.845507  397419 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.48868759s)
	I0917 08:38:59.846428  397419 node_ready.go:35] waiting up to 6m0s for node "addons-093168" to be "Ready" ...
	I0917 08:39:00.150766  397419 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:00.150864  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 08:39:00.241261  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 08:39:00.241385  397419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 08:39:00.434396  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:00.434751  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 08:39:00.434837  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 08:39:00.550189  397419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-093168" context rescaled to 1 replicas
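
The rescale logged above pins the coredns deployment to a single replica. Its effect matches the following client-go sketch against the scale subresource (a minimal illustration, assuming a kubernetes.Interface named c; this is not minikube's kapi.go code):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// scaleCoreDNS sets the coredns deployment in kube-system to one
	// replica via the scale subresource.
	func scaleCoreDNS(ctx context.Context, c kubernetes.Interface) error {
		scale, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}
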
	I0917 08:39:00.748755  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 08:39:00.748843  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 08:39:00.937410  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:00.937442  397419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 08:39:01.233803  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:01.943544  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:03.261179  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.719582492s)
	I0917 08:39:03.261217  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.516878812s)
	I0917 08:39:03.261224  397419 addons.go:475] Verifying addon ingress=true in "addons-093168"
	I0917 08:39:03.261298  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.427173682s)
	I0917 08:39:03.261369  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.426103213s)
	I0917 08:39:03.261406  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.411830401s)
	I0917 08:39:03.261493  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.325021448s)
	I0917 08:39:03.261534  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.322551933s)
	I0917 08:39:03.261613  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.227807299s)
	I0917 08:39:03.261653  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.811965691s)
	I0917 08:39:03.261677  397419 addons.go:475] Verifying addon registry=true in "addons-093168"
	I0917 08:39:03.261733  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.627156118s)
	I0917 08:39:03.261799  397419 addons.go:475] Verifying addon metrics-server=true in "addons-093168"
	I0917 08:39:03.263039  397419 out.go:177] * Verifying ingress addon...
	I0917 08:39:03.264106  397419 out.go:177] * Verifying registry addon...
	I0917 08:39:03.265798  397419 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 08:39:03.266577  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 08:39:03.338558  397419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:03.338666  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:03.338842  397419 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 08:39:03.338910  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
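
The kapi.go lines here poll pods matching a label selector until they leave Pending. A minimal sketch of such a wait loop with client-go (illustrative; waitForPods is a hypothetical name, and this simplified predicate only checks for the Running phase):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls until every pod matching the selector is Running.
	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 3*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}
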
	W0917 08:39:03.344429  397419 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
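
The "Operation cannot be fulfilled ... the object has been modified" failure above is an optimistic-concurrency conflict: another client updated the StorageClass between this client's read and write, so the write carried a stale resourceVersion. The standard remedy is a re-read-and-retry loop, for example with client-go's retry helper (a sketch under that assumption, not minikube's code):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markDefault re-reads the StorageClass on every attempt so each
	// update carries a fresh resourceVersion.
	func markDefault(ctx context.Context, c kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}
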
	I0917 08:39:03.835535  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:03.868020  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.931736693s)
	W0917 08:39:03.868122  397419 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 08:39:03.868142  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.117119927s)
	I0917 08:39:03.868181  397419 retry.go:31] will retry after 226.647603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 08:39:03.868254  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.433802493s)
	I0917 08:39:03.869652  397419 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-093168 service yakd-dashboard -n yakd-dashboard
	
	I0917 08:39:03.934770  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.095668  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
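
The failure being retried here is a CRD ordering problem: the csi-hostpath-snapclass VolumeSnapshotClass was applied in the same batch as the CRD that defines its kind, before the API server had established the new type (hence "ensure CRDs are installed first"). One common guard is to wait for the CRD's Established condition before applying dependent objects; a minimal sketch with the apiextensions clientset (illustrative, assuming crdClient is constructed elsewhere; minikube instead simply retries the apply, as the log shows):

	package sketch

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitForCRD polls until the named CRD reports Established=True,
	// meaning custom resources of that kind can be created.
	func waitForCRD(ctx context.Context, crdClient apiextclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, time.Minute, true,
			func(ctx context.Context) (bool, error) {
				crd, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // not found yet; keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}
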
	I0917 08:39:04.269371  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.269859  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:04.350132  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:04.360728  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 08:39:04.360808  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:39:04.384783  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:39:04.471408  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.23753895s)
	I0917 08:39:04.471460  397419 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-093168"
	I0917 08:39:04.473008  397419 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 08:39:04.475211  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 08:39:04.535330  397419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:04.535353  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:04.598789  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 08:39:04.615582  397419 addons.go:234] Setting addon gcp-auth=true in "addons-093168"
	I0917 08:39:04.615652  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:39:04.616089  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:39:04.633132  397419 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 08:39:04.633192  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:39:04.651065  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:39:04.769973  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.770233  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.035291  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:05.335175  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:05.336078  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.535256  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:05.769510  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:05.769763  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.979262  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.269556  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:06.269756  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:06.350348  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:06.479032  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.769819  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:06.770387  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:06.979151  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.991964  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.89623192s)
	I0917 08:39:06.992009  397419 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.358851016s)
	I0917 08:39:06.993965  397419 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 08:39:06.995369  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:39:06.996678  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 08:39:06.996699  397419 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 08:39:07.050138  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 08:39:07.050166  397419 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 08:39:07.070212  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:07.070239  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 08:39:07.088585  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:07.269903  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:07.270150  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:07.478566  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:07.742409  397419 addons.go:475] Verifying addon gcp-auth=true in "addons-093168"
	I0917 08:39:07.743971  397419 out.go:177] * Verifying gcp-auth addon...
	I0917 08:39:07.746772  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 08:39:07.749628  397419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 08:39:07.749648  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:07.850058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:07.850470  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:07.980638  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:08.250181  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:08.269219  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:08.269486  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:08.478757  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:08.750637  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:08.769245  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:08.769763  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:08.849706  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:08.978545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:09.250459  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:09.269495  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.269663  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:09.479237  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:09.749689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:09.769399  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:09.769720  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.978863  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:10.250410  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:10.269526  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:10.269619  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.478837  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:10.750940  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:10.769805  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:10.770515  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.979280  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:11.249995  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:11.269719  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:11.270190  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.350491  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:11.478320  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:11.750247  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:11.769390  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:11.769429  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.978986  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:12.250516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:12.269587  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.269693  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.480184  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:12.750404  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:12.769444  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.769591  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.978948  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.250817  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:13.269637  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.270016  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.479104  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.749738  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:13.769523  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.769820  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.850119  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:13.978949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.249884  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:14.269638  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.270062  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.479204  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.749928  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:14.769438  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.769821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.978839  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.250562  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:15.269409  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.269947  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.478860  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.750835  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:15.769345  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.770015  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.850276  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:15.979293  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.250064  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:16.269826  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.270274  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.478595  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.750278  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:16.769441  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.769627  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.978785  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.249585  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:17.269341  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.269848  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.479260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.749952  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:17.769578  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.769936  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.979325  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.249779  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:18.269465  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.269775  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.350075  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:18.478976  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.750758  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:18.769496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.769979  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.979120  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.249745  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:19.269362  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.269944  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.479390  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.749971  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:19.769917  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.770115  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.978384  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.250150  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:20.269613  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.270040  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.479591  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.750572  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:20.769329  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.769808  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.849500  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:20.978496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.250173  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:21.269174  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.269534  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.478769  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.751128  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:21.769357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.769371  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.978913  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.250688  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:22.269349  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.269695  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.478881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.750753  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:22.769486  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.769809  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.849938  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:22.981047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.249913  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:23.269440  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.269919  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.478892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.750856  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:23.769354  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.769865  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.978955  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.249899  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:24.269545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.269991  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.479144  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.750022  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:24.769833  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.770464  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.978298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.250252  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:25.269224  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.269557  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.350289  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:25.479127  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.749639  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:25.769205  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.769585  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.979064  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.250038  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:26.269663  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.270152  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.478995  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.750285  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:26.769308  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.769370  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.978745  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.250676  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:27.269322  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.269652  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.478412  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.750691  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:27.769200  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.769604  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.849933  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:27.979206  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.249964  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:28.269520  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.269919  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.479193  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.749933  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:28.769877  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.770211  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.979141  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.249874  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:29.270072  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.270348  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.478073  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.749899  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:29.769818  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.770374  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.979288  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.250272  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:30.269500  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.269546  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.350342  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:30.479086  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.749787  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:30.769541  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.770013  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.979093  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.250841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:31.269421  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.269882  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.479027  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.749892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:31.769497  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.769834  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.979224  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.250379  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:32.269381  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.269400  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.479357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.750376  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:32.769602  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.769757  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.850423  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:32.979114  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.251004  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:33.269908  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.270175  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.479600  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.749949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:33.769584  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.770008  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.979236  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.250012  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:34.269687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.270180  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.479255  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.750023  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:34.769580  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.770002  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.978387  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.250069  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:35.269828  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.270241  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.349451  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:35.478206  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.749945  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:35.769452  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.769865  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.978859  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.250835  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:36.269592  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.269917  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.478473  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.750428  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:36.769595  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.769685  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.978362  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.250516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:37.269304  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.269681  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.350217  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:37.479043  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.750460  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:37.769597  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.769948  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.978771  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.250668  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:38.269338  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.269667  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.478938  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.750692  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:38.769540  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.770044  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.979152  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.249775  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:39.269195  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:39.269607  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.478771  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.750626  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:39.769136  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:39.769575  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.850038  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:39.979047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.249695  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:40.269441  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:40.269779  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.479084  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.749817  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:40.769332  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:40.769870  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.978708  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.250949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:41.269314  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:41.269830  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.480399  397419 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:41.480422  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.760397  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:41.837192  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.837670  397419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:41.837689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:41.849891  397419 node_ready.go:49] node "addons-093168" has status "Ready":"True"
	I0917 08:39:41.849914  397419 node_ready.go:38] duration metric: took 42.0034583s for node "addons-093168" to be "Ready" ...
	I0917 08:39:41.849924  397419 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 08:39:41.858669  397419 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:42.038738  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.251747  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:42.352912  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:42.353583  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.479530  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.750176  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:42.770265  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:42.770895  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.979804  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.251776  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:43.351669  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:43.352090  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.364736  397419 pod_ready.go:93] pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.364757  397419 pod_ready.go:82] duration metric: took 1.50606765s for pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.364777  397419 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.369471  397419 pod_ready.go:93] pod "etcd-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.369494  397419 pod_ready.go:82] duration metric: took 4.709608ms for pod "etcd-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.369508  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.373655  397419 pod_ready.go:93] pod "kube-apiserver-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.373672  397419 pod_ready.go:82] duration metric: took 4.156439ms for pod "kube-apiserver-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.373680  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.377527  397419 pod_ready.go:93] pod "kube-controller-manager-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.377561  397419 pod_ready.go:82] duration metric: took 3.873985ms for pod "kube-controller-manager-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.377572  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t77c5" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.450713  397419 pod_ready.go:93] pod "kube-proxy-t77c5" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.450741  397419 pod_ready.go:82] duration metric: took 73.161651ms for pod "kube-proxy-t77c5" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.450755  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.479047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.750717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:43.769660  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:43.769998  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.850947  397419 pod_ready.go:93] pod "kube-scheduler-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.850971  397419 pod_ready.go:82] duration metric: took 400.20789ms for pod "kube-scheduler-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.850982  397419 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.980093  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.250260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:44.269521  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:44.270044  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:44.479161  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.750804  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:44.770420  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:44.770636  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.035777  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.250723  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:45.269748  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:45.270038  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.480689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.750763  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:45.769885  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:45.770680  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.857292  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:45.980017  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.250727  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:46.269788  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:46.270046  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:46.539234  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.751501  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:46.835507  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:46.836067  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.036749  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.250892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:47.336881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:47.336877  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.536654  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.750566  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:47.770379  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:47.770654  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.857353  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:47.980545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.251036  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:48.270119  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:48.270766  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.481111  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.751338  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:48.770188  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.771890  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:48.980058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.250249  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:49.270268  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:49.270358  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.480036  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.750762  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:49.770978  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:49.772174  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.857941  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:49.980041  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.250706  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:50.269862  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:50.270014  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:50.480731  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.751060  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:50.770120  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:50.770641  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.035548  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.250927  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:51.337208  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:51.337503  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.480679  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.750819  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:51.769976  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:51.770649  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.980192  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.250287  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:52.273280  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.353216  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:52.356559  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:52.479644  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.750695  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:52.769840  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:52.769992  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.980341  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.250812  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:53.269713  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:53.269993  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.479306  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.751203  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:53.769942  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.770231  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:53.982444  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.251381  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:54.270391  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:54.270907  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.357551  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:54.479329  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.750585  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:54.769800  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.770242  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:54.980330  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.250105  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:55.272058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:55.272343  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.480049  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.750228  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:55.769721  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:55.769811  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.979630  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.250644  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:56.270143  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.270801  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:56.361917  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:56.535770  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.750820  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:56.770318  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.834677  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.037436  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:57.251657  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:57.338559  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.340296  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:57.539728  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:57.750702  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:57.836323  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.836465  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.035687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:58.250979  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:58.270445  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:58.270847  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.480099  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:58.750815  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:58.770260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:58.770835  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.858855  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:58.980298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:59.250242  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:59.271058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:59.271285  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:59.534742  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:59.749993  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:59.770735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:59.770822  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:59.980421  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:00.250549  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:00.269795  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:00.270066  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:00.481133  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:00.750352  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:00.770060  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:00.770078  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:00.980516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:01.250748  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:01.269906  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:01.270542  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:01.357167  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:01.479831  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:01.750735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:01.851522  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:01.852196  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:01.980255  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:02.250668  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:02.270004  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:02.270239  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:02.480121  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:02.750937  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:02.770293  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:02.770548  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:02.980319  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:03.250471  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:03.269687  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:03.270015  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:03.358379  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:03.480308  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:03.750910  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:03.769915  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:03.770350  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:03.980888  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:04.250949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:04.334052  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:04.334547  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:04.536288  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:04.751331  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:04.769923  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:04.770074  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:04.979484  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:05.250753  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:05.269588  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:05.270367  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:05.479717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:05.750044  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:05.770343  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:05.770697  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:05.857232  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:05.980252  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:06.250527  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:06.269894  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:06.270178  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:06.479711  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:06.750183  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:06.771071  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:06.771665  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:06.979659  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:07.251357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:07.270510  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:07.270939  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:07.480189  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:07.750845  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:07.770209  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:07.771533  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:07.857980  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:07.983095  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:08.250342  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:08.270999  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:08.271094  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:08.479975  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:08.751137  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:08.770431  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:08.770712  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:08.980321  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:09.251024  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:09.270126  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:09.270735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:09.480983  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:09.751277  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:09.769930  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:09.770147  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:09.980150  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:10.250493  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:10.269821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:10.271102  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:10.356970  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:10.481755  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:10.749841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:10.769711  397419 kapi.go:107] duration metric: took 1m7.503126792s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 08:40:10.770295  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:10.979832  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:11.250142  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:11.270431  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:11.480956  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:11.753496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:11.770003  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:11.980475  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:12.250784  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:12.270813  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:12.357211  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:12.480873  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:12.751126  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:12.770604  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:12.979811  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:13.250139  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:13.270888  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:13.480241  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:13.750443  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:13.769994  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:13.979631  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:14.250829  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:14.270340  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:14.480298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:14.750382  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:14.769880  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:14.857115  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:14.980593  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:15.250737  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:15.269909  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:15.480460  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:15.750879  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:15.770052  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:15.979744  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:16.251095  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:16.270338  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:16.480567  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:16.749687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:16.770077  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.035489  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:17.250313  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:17.269943  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.356644  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:17.480054  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:17.750392  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:17.769702  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.980088  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:18.250474  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:18.269932  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:18.511698  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:18.750521  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:18.852675  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:18.979597  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:19.249859  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:19.270206  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:19.357692  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:19.480159  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:19.750104  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:19.771108  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:19.979504  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:20.251660  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:20.271175  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:20.480098  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:20.750670  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:20.770690  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:20.980839  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:21.250744  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:21.270685  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:21.357832  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:21.480348  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:21.750284  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:21.769821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:21.981107  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:22.249898  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:22.270237  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:22.480433  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:22.750573  397419 kapi.go:107] duration metric: took 1m15.003789133s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 08:40:22.752532  397419 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-093168 cluster.
	I0917 08:40:22.753817  397419 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 08:40:22.755155  397419 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
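The three gcp-auth hints above describe an opt-out by label. As a hypothetical illustration (the pod name and image below are made up, not from this run), the label has to be present when the pod is created, because the addon injects credentials at admission time:

    kubectl --context addons-093168 run no-gcp-creds --image=busybox \
      --labels=gcp-auth-skip-secret=true -- sleep 3600
    # labeling an already-running pod has no effect; recreate it instead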
	I0917 08:40:22.769882  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:22.979715  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:23.270378  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:23.480884  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:23.770749  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:23.856903  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:23.979682  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:24.270418  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:24.481750  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:24.838546  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:24.979926  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:25.336387  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:25.536841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:25.836400  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:25.857822  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:26.038227  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:26.270962  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:26.480310  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:26.769993  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:26.979717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:27.270245  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:27.479626  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:27.770138  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:27.979728  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:28.270445  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:28.357521  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:28.479512  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:28.771302  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:28.980203  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:29.272777  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:29.479974  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:29.771290  397419 kapi.go:107] duration metric: took 1m26.505487302s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 08:40:30.036881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:30.480783  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:30.856907  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:30.980652  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:31.480186  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:31.979880  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:32.481022  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:32.979408  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:33.357762  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:33.479779  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:33.979963  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:34.480525  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:34.980951  397419 kapi.go:107] duration metric: took 1m30.505737137s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 08:40:35.011214  397419 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, helm-tiller, nvidia-device-plugin, storage-provisioner, metrics-server, default-storageclass, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0917 08:40:35.088827  397419 addons.go:510] duration metric: took 1m37.983731495s for enable addons: enabled=[cloud-spanner ingress-dns helm-tiller nvidia-device-plugin storage-provisioner metrics-server default-storageclass inspektor-gadget yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
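Each kapi.go:96 line above is one iteration of a poll of the pod list for a given label selector, repeated until the pods report Ready. A rough manual equivalent using kubectl instead of minikube's internal API client (selector and namespace taken from the log; this is a sketch, not what minikube actually runs):

    kubectl --context addons-093168 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=6m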
	I0917 08:40:35.963282  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:38.356952  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:40.357057  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:42.857137  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:45.357585  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:47.415219  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:49.856695  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:52.357369  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:54.856959  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:56.857573  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:59.356748  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:01.357311  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:03.857150  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:05.857298  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:08.356921  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:10.856637  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:12.857089  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:15.356886  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:17.357162  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:19.857088  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:21.857768  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:22.357225  397419 pod_ready.go:93] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"True"
	I0917 08:41:22.357248  397419 pod_ready.go:82] duration metric: took 1m38.50625923s for pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.357261  397419 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.361581  397419 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace has status "Ready":"True"
	I0917 08:41:22.361602  397419 pod_ready.go:82] duration metric: took 4.33393ms for pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.361622  397419 pod_ready.go:39] duration metric: took 1m40.511686973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
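The pod_ready.go wait above cycles through one label selector per system-critical component. A by-hand sketch of the same check (the selector list is copied from the log line; the loop structure is an assumption for illustration):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy \
               component=kube-scheduler; do
      kubectl --context addons-093168 -n kube-system wait pod -l "$sel" \
        --for=condition=Ready --timeout=6m
    done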
	I0917 08:41:22.361642  397419 api_server.go:52] waiting for apiserver process to appear ...
	I0917 08:41:22.361682  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:22.361731  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:22.396772  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:22.396810  397419 cri.go:89] found id: ""
	I0917 08:41:22.396820  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:22.396885  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.401393  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:22.401457  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:22.433869  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:22.433890  397419 cri.go:89] found id: ""
	I0917 08:41:22.433898  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:22.433944  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.437332  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:22.437407  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:22.472376  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:22.472397  397419 cri.go:89] found id: ""
	I0917 08:41:22.472404  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:22.472448  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.475763  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:22.475824  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:22.509241  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:22.509272  397419 cri.go:89] found id: ""
	I0917 08:41:22.509284  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:22.509335  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.512804  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:22.512865  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:22.546986  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:22.547007  397419 cri.go:89] found id: ""
	I0917 08:41:22.547015  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:22.547060  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.550402  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:22.550459  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:22.584566  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:22.584588  397419 cri.go:89] found id: ""
	I0917 08:41:22.584604  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:22.584655  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.588033  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:22.588092  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:22.621636  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:22.621662  397419 cri.go:89] found id: ""
	I0917 08:41:22.621672  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:22.621725  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.625177  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:22.625207  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:22.651122  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:22.651158  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:22.750350  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:22.750382  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:22.794944  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:22.794981  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:22.847406  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:22.847443  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:22.882612  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:22.882647  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:22.938657  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:22.938694  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:22.980301  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:22.980332  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:23.057322  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:23.057359  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:23.092524  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:23.092557  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:23.129832  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:23.129871  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:23.165427  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:23.165458  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
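The log-gathering block above repeats one pattern per component: resolve the container ID by name with crictl, then tail that container's logs. Condensed into a single pipeline run by hand on the node (both commands appear verbatim in the log; the shell variable is illustrative):

    id="$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)"
    sudo /usr/bin/crictl logs --tail 400 "$id"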
	I0917 08:41:25.744385  397419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 08:41:25.758404  397419 api_server.go:72] duration metric: took 2m28.653351209s to wait for apiserver process to appear ...
	I0917 08:41:25.758434  397419 api_server.go:88] waiting for apiserver healthz status ...
	I0917 08:41:25.758473  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:25.758517  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:25.791782  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:25.791813  397419 cri.go:89] found id: ""
	I0917 08:41:25.791824  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:25.791876  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.795162  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:25.795222  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:25.827605  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:25.827632  397419 cri.go:89] found id: ""
	I0917 08:41:25.827642  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:25.827695  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.830956  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:25.831016  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:25.864525  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:25.864552  397419 cri.go:89] found id: ""
	I0917 08:41:25.864562  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:25.864628  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.867980  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:25.868042  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:25.901946  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:25.901966  397419 cri.go:89] found id: ""
	I0917 08:41:25.901977  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:25.902026  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.905404  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:25.905458  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:25.938828  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:25.938850  397419 cri.go:89] found id: ""
	I0917 08:41:25.938859  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:25.938905  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.942182  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:25.942243  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:25.975310  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:25.975334  397419 cri.go:89] found id: ""
	I0917 08:41:25.975345  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:25.975405  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.978637  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:25.978703  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:26.012169  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:26.012190  397419 cri.go:89] found id: ""
	I0917 08:41:26.012200  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:26.012256  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:26.015540  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:26.015562  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:26.093016  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:26.093054  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:26.136808  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:26.136847  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:26.188782  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:26.188814  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:26.226705  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:26.226736  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:26.259580  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:26.259609  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:26.335847  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:26.335885  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:26.378206  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:26.378237  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:26.404518  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:26.404550  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:26.508227  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:26.508263  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:26.543742  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:26.543777  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:26.600899  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:26.600938  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.138040  397419 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 08:41:29.142631  397419 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 08:41:29.143571  397419 api_server.go:141] control plane version: v1.31.1
	I0917 08:41:29.143606  397419 api_server.go:131] duration metric: took 3.385163598s to wait for apiserver health ...
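The healthz probe above is a plain HTTPS GET that returns the literal body "ok" when the apiserver is healthy. Assuming the node IP is reachable from your shell and anonymous access to /healthz is allowed (the kubeadm default), the same check by hand would be:

    curl -sk https://192.168.49.2:8443/healthz
    # ok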
	I0917 08:41:29.143621  397419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 08:41:29.143650  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:29.143699  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:29.178086  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:29.178111  397419 cri.go:89] found id: ""
	I0917 08:41:29.178121  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:29.178180  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.181712  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:29.181779  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:29.215733  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:29.215755  397419 cri.go:89] found id: ""
	I0917 08:41:29.215763  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:29.215809  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.219058  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:29.219111  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:29.252251  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:29.252272  397419 cri.go:89] found id: ""
	I0917 08:41:29.252279  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:29.252321  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.255633  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:29.255688  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:29.289333  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:29.289359  397419 cri.go:89] found id: ""
	I0917 08:41:29.289369  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:29.289423  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.292943  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:29.292996  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:29.326709  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:29.326731  397419 cri.go:89] found id: ""
	I0917 08:41:29.326739  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:29.326799  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.330170  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:29.330226  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:29.363477  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:29.363501  397419 cri.go:89] found id: ""
	I0917 08:41:29.363511  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:29.363567  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.366804  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:29.366860  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:29.399852  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.399872  397419 cri.go:89] found id: ""
	I0917 08:41:29.399881  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:29.399934  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.403233  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:29.403253  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:29.451453  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:29.451484  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:29.488951  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:29.488979  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:29.523572  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:29.523603  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:29.579709  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:29.579750  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:29.658415  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:29.658455  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:29.735441  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:29.735481  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:29.762124  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:29.762159  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:29.856247  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:29.856278  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:29.902365  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:29.902398  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:29.938050  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:29.938081  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.973223  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:29.973251  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:32.526366  397419 system_pods.go:59] 19 kube-system pods found
	I0917 08:41:32.526399  397419 system_pods.go:61] "coredns-7c65d6cfc9-7lhft" [d955ab8f-33f3-4177-a7cf-29b7b9cc1102] Running
	I0917 08:41:32.526405  397419 system_pods.go:61] "csi-hostpath-attacher-0" [74cbb098-f189-44df-a4b9-3d4644fad690] Running
	I0917 08:41:32.526409  397419 system_pods.go:61] "csi-hostpath-resizer-0" [2d53c081-d93a-46a4-8b7b-29e15b9b485e] Running
	I0917 08:41:32.526413  397419 system_pods.go:61] "csi-hostpathplugin-lknd7" [3267ecfa-6ae5-4291-9944-574c0476e9ec] Running
	I0917 08:41:32.526416  397419 system_pods.go:61] "etcd-addons-093168" [a017480c-3ca0-477f-801b-630887a3efdd] Running
	I0917 08:41:32.526420  397419 system_pods.go:61] "kindnet-nvhtv" [2a27ef1d-01b4-4db6-9b83-51a2b2889bc2] Running
	I0917 08:41:32.526422  397419 system_pods.go:61] "kube-apiserver-addons-093168" [1b03826d-3f50-4a0c-a2ad-f8d354f0935a] Running
	I0917 08:41:32.526425  397419 system_pods.go:61] "kube-controller-manager-addons-093168" [2da0a6e2-49be-44c3-a463-463a9865310f] Running
	I0917 08:41:32.526428  397419 system_pods.go:61] "kube-ingress-dns-minikube" [236b5470-912c-4665-ae2a-0aeda61e0892] Running
	I0917 08:41:32.526432  397419 system_pods.go:61] "kube-proxy-t77c5" [76518769-e724-461e-8134-d120144d60a8] Running
	I0917 08:41:32.526436  397419 system_pods.go:61] "kube-scheduler-addons-093168" [8dbe178e-95a4-491e-a059-423f6b78f417] Running
	I0917 08:41:32.526441  397419 system_pods.go:61] "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
	I0917 08:41:32.526445  397419 system_pods.go:61] "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
	I0917 08:41:32.526450  397419 system_pods.go:61] "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
	I0917 08:41:32.526455  397419 system_pods.go:61] "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
	I0917 08:41:32.526461  397419 system_pods.go:61] "snapshot-controller-56fcc65765-md5h6" [ff141ee6-2569-49b0-8b1a-83d9a1a05178] Running
	I0917 08:41:32.526470  397419 system_pods.go:61] "snapshot-controller-56fcc65765-xdr22" [69737144-ad79-4db9-ae9c-e5575f580f48] Running
	I0917 08:41:32.526475  397419 system_pods.go:61] "storage-provisioner" [e20caa93-3db5-4d96-b8a8-7665d4f5437d] Running
	I0917 08:41:32.526483  397419 system_pods.go:61] "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
	I0917 08:41:32.526493  397419 system_pods.go:74] duration metric: took 3.382863956s to wait for pod list to return data ...
	I0917 08:41:32.526503  397419 default_sa.go:34] waiting for default service account to be created ...
	I0917 08:41:32.529073  397419 default_sa.go:45] found service account: "default"
	I0917 08:41:32.529100  397419 default_sa.go:55] duration metric: took 2.584342ms for default service account to be created ...
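The default_sa step simply confirms that the "default" ServiceAccount exists in the default namespace; a one-line manual equivalent:

    kubectl --context addons-093168 get serviceaccount default -o name
    # serviceaccount/default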
	I0917 08:41:32.529110  397419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 08:41:32.539148  397419 system_pods.go:86] 19 kube-system pods found
	I0917 08:41:32.539179  397419 system_pods.go:89] "coredns-7c65d6cfc9-7lhft" [d955ab8f-33f3-4177-a7cf-29b7b9cc1102] Running
	I0917 08:41:32.539185  397419 system_pods.go:89] "csi-hostpath-attacher-0" [74cbb098-f189-44df-a4b9-3d4644fad690] Running
	I0917 08:41:32.539189  397419 system_pods.go:89] "csi-hostpath-resizer-0" [2d53c081-d93a-46a4-8b7b-29e15b9b485e] Running
	I0917 08:41:32.539193  397419 system_pods.go:89] "csi-hostpathplugin-lknd7" [3267ecfa-6ae5-4291-9944-574c0476e9ec] Running
	I0917 08:41:32.539196  397419 system_pods.go:89] "etcd-addons-093168" [a017480c-3ca0-477f-801b-630887a3efdd] Running
	I0917 08:41:32.539200  397419 system_pods.go:89] "kindnet-nvhtv" [2a27ef1d-01b4-4db6-9b83-51a2b2889bc2] Running
	I0917 08:41:32.539203  397419 system_pods.go:89] "kube-apiserver-addons-093168" [1b03826d-3f50-4a0c-a2ad-f8d354f0935a] Running
	I0917 08:41:32.539207  397419 system_pods.go:89] "kube-controller-manager-addons-093168" [2da0a6e2-49be-44c3-a463-463a9865310f] Running
	I0917 08:41:32.539210  397419 system_pods.go:89] "kube-ingress-dns-minikube" [236b5470-912c-4665-ae2a-0aeda61e0892] Running
	I0917 08:41:32.539213  397419 system_pods.go:89] "kube-proxy-t77c5" [76518769-e724-461e-8134-d120144d60a8] Running
	I0917 08:41:32.539216  397419 system_pods.go:89] "kube-scheduler-addons-093168" [8dbe178e-95a4-491e-a059-423f6b78f417] Running
	I0917 08:41:32.539220  397419 system_pods.go:89] "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
	I0917 08:41:32.539223  397419 system_pods.go:89] "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
	I0917 08:41:32.539227  397419 system_pods.go:89] "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
	I0917 08:41:32.539230  397419 system_pods.go:89] "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
	I0917 08:41:32.539235  397419 system_pods.go:89] "snapshot-controller-56fcc65765-md5h6" [ff141ee6-2569-49b0-8b1a-83d9a1a05178] Running
	I0917 08:41:32.539242  397419 system_pods.go:89] "snapshot-controller-56fcc65765-xdr22" [69737144-ad79-4db9-ae9c-e5575f580f48] Running
	I0917 08:41:32.539245  397419 system_pods.go:89] "storage-provisioner" [e20caa93-3db5-4d96-b8a8-7665d4f5437d] Running
	I0917 08:41:32.539248  397419 system_pods.go:89] "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
	I0917 08:41:32.539255  397419 system_pods.go:126] duration metric: took 10.139894ms to wait for k8s-apps to be running ...
	I0917 08:41:32.539265  397419 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 08:41:32.539310  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 08:41:32.550663  397419 system_svc.go:56] duration metric: took 11.387952ms WaitForService to wait for kubelet
	I0917 08:41:32.550703  397419 kubeadm.go:582] duration metric: took 2m35.445654974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:41:32.550732  397419 node_conditions.go:102] verifying NodePressure condition ...
	I0917 08:41:32.553809  397419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 08:41:32.553834  397419 node_conditions.go:123] node cpu capacity is 8
	I0917 08:41:32.553851  397419 node_conditions.go:105] duration metric: took 3.112867ms to run NodePressure ...
	I0917 08:41:32.553869  397419 start.go:241] waiting for startup goroutines ...
	I0917 08:41:32.553875  397419 start.go:246] waiting for cluster config update ...
	I0917 08:41:32.553893  397419 start.go:255] writing updated cluster config ...
	I0917 08:41:32.554149  397419 ssh_runner.go:195] Run: rm -f paused
	I0917 08:41:32.604339  397419 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 08:41:32.606540  397419 out.go:177] * Done! kubectl is now configured to use "addons-093168" cluster and "default" namespace by default
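
The startup log above gates on "sudo systemctl is-active --quiet service kubelet" and treats a zero exit status as running. A minimal Go sketch of the same probe, assuming systemctl is on PATH and the unit is named kubelet:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitActive reports whether a systemd unit is active:
	// "systemctl is-active --quiet <unit>" exits 0 only when it is.
	func unitActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		if unitActive("kubelet") {
			fmt.Println("kubelet service is running")
		} else {
			fmt.Println("kubelet service is not active")
		}
	}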
	
	
	==> CRI-O <==
	Sep 17 08:50:31 addons-093168 crio[1031]: time="2024-09-17 08:50:31.026934518Z" level=info msg="Image docker.io/nginx:alpine not found" id=0ebb0be0-c282-434d-b49d-3b58435e6bb0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:50:36 addons-093168 crio[1031]: time="2024-09-17 08:50:36.935467370Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=368965ed-bd07-4a8f-bc5e-cc943a3344e3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:50:36 addons-093168 crio[1031]: time="2024-09-17 08:50:36.935696158Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=368965ed-bd07-4a8f-bc5e-cc943a3344e3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:50:43 addons-093168 crio[1031]: time="2024-09-17 08:50:43.936190985Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=006a4124-d461-4ad0-ab10-86874ce3a9d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:50:43 addons-093168 crio[1031]: time="2024-09-17 08:50:43.936493153Z" level=info msg="Image docker.io/nginx:alpine not found" id=006a4124-d461-4ad0-ab10-86874ce3a9d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.205481257Z" level=info msg="Stopping container: 91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb (timeout: 30s)" id=92dc8785-28c6-4f44-83ee-fde445d85b5a name=/runtime.v1.RuntimeService/StopContainer
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.232280618Z" level=info msg="Stopping container: 79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835 (timeout: 30s)" id=4a9ae40c-ad32-42f4-a7ef-b87da66f722f name=/runtime.v1.RuntimeService/StopContainer
	Sep 17 08:50:46 addons-093168 conmon[3988]: conmon 91bf8d9cb37639fd2775 <ninfo>: container 4000 exited with status 2
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.364762329Z" level=info msg="Stopped container 91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb: kube-system/registry-66c9cd494c-8h9wm/registry" id=92dc8785-28c6-4f44-83ee-fde445d85b5a name=/runtime.v1.RuntimeService/StopContainer
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.365406259Z" level=info msg="Stopping pod sandbox: 4acc441777bf39f98153dcbc93ad8b8be5229954bab13aade278f6d977af04ea" id=67af0330-63ce-4f36-baa2-2a0ce84e040f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.365719493Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-8h9wm Namespace:kube-system ID:4acc441777bf39f98153dcbc93ad8b8be5229954bab13aade278f6d977af04ea UID:efc2db30-2af8-4cf7-a316-5dac4df4a136 NetNS:/var/run/netns/ae1e91de-ec02-411b-810d-dca4f85f165f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.365908267Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-8h9wm from CNI network \"kindnet\" (type=ptp)"
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.374588546Z" level=info msg="Stopped container 79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835: kube-system/registry-proxy-9plz8/registry-proxy" id=4a9ae40c-ad32-42f4-a7ef-b87da66f722f name=/runtime.v1.RuntimeService/StopContainer
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.375116027Z" level=info msg="Stopping pod sandbox: 584a4a032f708fdcdff5e7082ec63c0bac0be0949c420aba788182bfd0a9ec42" id=97c664bc-938a-4e7c-8597-61b2533aab5f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.379404436Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-PRPHJYZOG2GBSFDW - [0:0]\n:KUBE-HP-O772BQVZ57WKOT5O - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-CQ2D5XRY55JZRP2S - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vgw4z_ingress-nginx_9db3795d-842f-4d73-8bce-d33636cbc4e8_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-CQ2D5XRY55JZRP2S\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vgw4z_ingress-nginx_9db3795d-842f-4d73-8bce-d33636cbc4e8_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-PRPHJYZOG2GBSFDW\n-A KUBE-HP-CQ2D5XRY55JZRP2S -s 10.244.0.21/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vgw4z_ingress-nginx_9db3795d-842f-4d73-8bce-d33636cbc4e8_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-CQ2D5XRY55JZRP2S -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vgw4z_ingress-nginx_9db3795d-842f-4d73-8bce-d33636cbc4e8_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.21:443\n-A KUBE-HP-PRPHJYZOG2GBSFDW -s 10.244.0.21/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vgw4z_ingress-nginx_9db3795d-842f-4d73-8bce-d33636cbc4e8_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-PRPHJYZOG2GBSFDW -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-vgw4z_ingress-nginx_9db3795d-842f-4d73-8bce-d33636cbc4e8_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.21:80\n-X KUBE-HP-O772BQVZ57WKOT5O\nCOMMIT\n"
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.381852151Z" level=info msg="Closing host port tcp:5000"
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.383279323Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.383511901Z" level=info msg="Got pod network &{Name:registry-proxy-9plz8 Namespace:kube-system ID:584a4a032f708fdcdff5e7082ec63c0bac0be0949c420aba788182bfd0a9ec42 UID:8bc41646-54c5-4d13-8d5f-bebcdc6f15ce NetNS:/var/run/netns/e052980e-c0ca-4e87-8c04-1e228fdd3715 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.383673123Z" level=info msg="Deleting pod kube-system_registry-proxy-9plz8 from CNI network \"kindnet\" (type=ptp)"
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.405457926Z" level=info msg="Stopped pod sandbox: 4acc441777bf39f98153dcbc93ad8b8be5229954bab13aade278f6d977af04ea" id=67af0330-63ce-4f36-baa2-2a0ce84e040f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 17 08:50:46 addons-093168 crio[1031]: time="2024-09-17 08:50:46.441377789Z" level=info msg="Stopped pod sandbox: 584a4a032f708fdcdff5e7082ec63c0bac0be0949c420aba788182bfd0a9ec42" id=97c664bc-938a-4e7c-8597-61b2533aab5f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 17 08:50:47 addons-093168 crio[1031]: time="2024-09-17 08:50:47.066401477Z" level=info msg="Removing container: 91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb" id=8dbac2f9-e507-4e8c-ba0f-bc4b9b5869bb name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 17 08:50:47 addons-093168 crio[1031]: time="2024-09-17 08:50:47.081573537Z" level=info msg="Removed container 91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb: kube-system/registry-66c9cd494c-8h9wm/registry" id=8dbac2f9-e507-4e8c-ba0f-bc4b9b5869bb name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 17 08:50:47 addons-093168 crio[1031]: time="2024-09-17 08:50:47.084718409Z" level=info msg="Removing container: 79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835" id=192651fd-168f-4a5a-9525-54c0694c9d7e name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 17 08:50:47 addons-093168 crio[1031]: time="2024-09-17 08:50:47.099847241Z" level=info msg="Removed container 79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835: kube-system/registry-proxy-9plz8/registry-proxy" id=192651fd-168f-4a5a-9525-54c0694c9d7e name=/runtime.v1.RuntimeService/RemoveContainer
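
The CRI-O entries above show the standard teardown sequence for the registry pods: StopContainer with a 30-second grace timeout, then sandbox/CNI teardown, then RemoveContainer. A rough sketch of the same sequence driven through the crictl CLI rather than the CRI gRPC API; the container ID is a placeholder, and sudo plus a configured crictl are assumed:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command under sudo and echoes its combined output.
	func run(args ...string) error {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		id := "91bf8d9cb376" // placeholder: a container ID from "crictl ps -a"
		// Graceful stop with the same 30s timeout CRI-O logs above.
		if err := run("crictl", "stop", "--timeout", "30", id); err != nil {
			fmt.Println("stop failed:", err)
			return
		}
		if err := run("crictl", "rm", id); err != nil {
			fmt.Println("rm failed:", err)
		}
	}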
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	3a3467f969d26       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            53 seconds ago      Exited              helper-pod                               0                   3f65b216fa601       helper-pod-create-pvc-4ce88798-9de6-4983-a132-fce6b160c6fe
	7f8693a93013e       98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31                                                                             55 seconds ago      Exited              helm-test                                0                   9c04ad20db784       helm-test
	502f78f1a7fe2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 minutes ago       Exited              gadget                                   6                   3fa45a8d6d2ea       gadget-hrsgq
	0906bd347c6d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	f64b5aebbe7dd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          10 minutes ago      Running             csi-provisioner                          0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	eba5434cab6ab       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            10 minutes ago      Running             liveness-probe                           0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	057ac2c02266d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           10 minutes ago      Running             hostpath                                 0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	a0cca87be1a6f       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             10 minutes ago      Running             controller                               0                   2ba51e0898663       ingress-nginx-controller-bc57996ff-vgw4z
	db9ecacd5aed6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                10 minutes ago      Running             node-driver-registrar                    0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	843e30f0a0cf8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 10 minutes ago      Running             gcp-auth                                 0                   2e75c3dc5c24b       gcp-auth-89d5ffd79-xhlm6
	2999275b8545c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        10 minutes ago      Running             metrics-server                           0                   7951cf53f3ce5       metrics-server-84c5f94fbc-bmr95
	a53dfdb3b91a2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              10 minutes ago      Running             csi-resizer                              0                   e4b2df5e4c60c       csi-hostpath-resizer-0
	a31591d3a75de       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             10 minutes ago      Running             local-path-provisioner                   0                   655c3c112fdda       local-path-provisioner-86d989889c-qkqjp
	221d8f80ce839       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             10 minutes ago      Running             csi-attacher                             0                   47552b94b1444       csi-hostpath-attacher-0
	12e5d8714fa59       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   10 minutes ago      Exited              patch                                    0                   4d5a9d109a211       ingress-nginx-admission-patch-pzmkp
	5859eff560d4d       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               10 minutes ago      Running             cloud-spanner-emulator                   0                   1c150e87d1045       cloud-spanner-emulator-769b77f747-qhw6c
	f921ee5175ec0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   10 minutes ago      Running             csi-external-health-monitor-controller   0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	a54dcb4e0840a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   10 minutes ago      Exited              create                                   0                   fc238c2462bf5       ingress-nginx-admission-create-4qdns
	b1aa0b4e6a00c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   47f5d8b226a2a       snapshot-controller-56fcc65765-xdr22
	85332a0e5866e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   f61171da5bfb1       snapshot-controller-56fcc65765-md5h6
	3300f395d8567       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             10 minutes ago      Running             minikube-ingress-dns                     0                   f7a1428432f34       kube-ingress-dns-minikube
	5eddba40afd11       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             11 minutes ago      Running             coredns                                  0                   ebe1938207849       coredns-7c65d6cfc9-7lhft
	6d7dbaef7a5cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             11 minutes ago      Running             storage-provisioner                      0                   c9466fe8d518b       storage-provisioner
	3a8b894037793       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             11 minutes ago      Running             kube-proxy                               0                   eb334b9a5799a       kube-proxy-t77c5
	c9fa6b2ef5f0b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             11 minutes ago      Running             kindnet-cni                              0                   2e76c07fa96a5       kindnet-nvhtv
	e817293c644c7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             12 minutes ago      Running             kube-scheduler                           0                   a4765fe76b73a       kube-scheduler-addons-093168
	3521aa957963e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             12 minutes ago      Running             kube-controller-manager                  0                   2608552715e00       kube-controller-manager-addons-093168
	498509ee96967       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             12 minutes ago      Running             etcd                                     0                   62ce9ab109c53       etcd-addons-093168
	a2e61e738c0da       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             12 minutes ago      Running             kube-apiserver                           0                   bceb5d8367d07       kube-apiserver-addons-093168
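
The table above is the human-readable form of "crictl ps -a"; the same data can be pulled as JSON for scripting. A sketch that reproduces the ID/name/state/image columns, with the caveat that the field names are assumed from the CRI ListContainersResponse shape and may vary across crictl versions:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// container mirrors the subset of fields shown in the table above.
	type container struct {
		ID    string `json:"id"`
		State string `json:"state"`
		Image struct {
			Image string `json:"image"`
		} `json:"image"`
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var resp struct {
			Containers []container `json:"containers"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-40s  %-18s  %s\n",
				c.ID, c.Metadata.Name, c.State, c.Image.Image)
		}
	}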
	
	
	==> coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] <==
	[INFO] 10.244.0.11:33082 - 25853 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001192s
	[INFO] 10.244.0.11:37329 - 15527 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075609s
	[INFO] 10.244.0.11:37329 - 17316 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121561s
	[INFO] 10.244.0.11:60250 - 35649 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005099659s
	[INFO] 10.244.0.11:60250 - 60739 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006207516s
	[INFO] 10.244.0.11:37419 - 41998 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006428119s
	[INFO] 10.244.0.11:37419 - 39435 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006489964s
	[INFO] 10.244.0.11:56965 - 22146 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005110836s
	[INFO] 10.244.0.11:56965 - 41870 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005774722s
	[INFO] 10.244.0.11:40932 - 6018 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055144s
	[INFO] 10.244.0.11:40932 - 2693 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093554s
	[INFO] 10.244.0.20:60603 - 21372 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000239521s
	[INFO] 10.244.0.20:56296 - 33744 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000369472s
	[INFO] 10.244.0.20:40076 - 30284 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123756s
	[INFO] 10.244.0.20:49639 - 52270 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158323s
	[INFO] 10.244.0.20:40994 - 1923 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099192s
	[INFO] 10.244.0.20:37435 - 32231 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168193s
	[INFO] 10.244.0.20:36201 - 45290 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008885924s
	[INFO] 10.244.0.20:59898 - 55008 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008870022s
	[INFO] 10.244.0.20:43991 - 39302 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007846244s
	[INFO] 10.244.0.20:58304 - 34077 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008338334s
	[INFO] 10.244.0.20:34428 - 29339 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006763856s
	[INFO] 10.244.0.20:47732 - 9825 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007153268s
	[INFO] 10.244.0.20:52184 - 47443 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000802704s
	[INFO] 10.244.0.20:41521 - 18294 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000879797s
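
The NXDOMAIN/NOERROR pairs above are the resolver walking the pod's search path: with the default ndots:5, a name containing fewer than five dots is tried against each search domain before being tried as-is, which is why registry.kube-system.svc.cluster.local (four dots) first appears with .svc.cluster.local, .cluster.local, and the GCE-internal suffixes appended. A small Go sketch of that expansion; the search list is reconstructed from the suffixes visible in the log, with the pod-namespace entry assumed:

	package main

	import (
		"fmt"
		"strings"
	)

	// candidates lists the names a resolver with the given ndots and
	// search path would try, in order.
	func candidates(name string, ndots int, search []string) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name) // the absolute name is tried last
	}

	func main() {
		search := []string{
			"kube-system.svc.cluster.local", // assumed: pod's own namespace
			"svc.cluster.local",
			"cluster.local",
			"us-central1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		for _, q := range candidates("registry.kube-system.svc.cluster.local", 5, search) {
			fmt.Println(q)
		}
	}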
	
	
	==> describe nodes <==
	Name:               addons-093168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-093168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=addons-093168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T08_38_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-093168
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-093168"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:38:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-093168
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 08:50:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 08:50:25 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 08:50:25 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 08:50:25 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 08:50:25 +0000   Tue, 17 Sep 2024 08:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-093168
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fdb73868874fa2aa4322a27fc496be
	  System UUID:                7036efa9-bcf4-469e-8312-994f69eacc62
	  Boot ID:                    8c59a26b-5d0c-4753-9e88-ef03399e569b
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-qhw6c     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     test-local-path                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  gadget                      gadget-hrsgq                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-xhlm6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vgw4z    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-7lhft                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-lknd7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-093168                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-nvhtv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-093168                250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-093168       200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-t77c5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-093168                100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-bmr95             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-md5h6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-xdr22        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-qkqjp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node addons-093168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node addons-093168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node addons-093168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node addons-093168 event: Registered Node addons-093168 in Controller
	  Normal   NodeReady                11m   kubelet          Node addons-093168 status is now: NodeReady
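
The Conditions and Allocatable blocks above come straight from the node object; the same data is reachable programmatically. A client-go sketch, assuming a kubeconfig at the default path and the node name from this report:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-093168", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same rows as the Conditions table above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("allocatable cpu:   ", node.Status.Allocatable.Cpu())
		fmt.Println("allocatable memory:", node.Status.Allocatable.Memory())
	}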
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba ff 74 a1 5e 3b 08 06
	[ +13.302976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 08 54 46 b8 ba 08 06
	[  +0.000352] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba ff 74 a1 5e 3b 08 06
	[Sep17 08:24] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 24 b9 ac 9a ab 08 06
	[  +0.000405] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a b6 29 69 41 ca 08 06
	[ +18.455196] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 92 00 b0 ac cb 10 08 06
	[  +0.102770] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	[ +10.887970] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cni0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[  +0.094820] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[Sep17 08:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 14 a2 f8 f7 06 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[ +21.407596] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 7a 9f 11 c8 01 08 06
	[  +0.000366] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	
	
	==> etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] <==
	{"level":"warn","ts":"2024-09-17T08:39:00.942463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.989097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:00.944324Z","caller":"traceutil/trace.go:171","msg":"trace[1842649547] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:389; }","duration":"190.846782ms","start":"2024-09-17T08:39:00.753470Z","end":"2024-09-17T08:39:00.944317Z","steps":["trace[1842649547] 'agreement among raft nodes before linearized reading'  (duration: 188.978646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:00.942492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.715581ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-17T08:39:00.944484Z","caller":"traceutil/trace.go:171","msg":"trace[1814892135] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:389; }","duration":"195.704417ms","start":"2024-09-17T08:39:00.748773Z","end":"2024-09-17T08:39:00.944477Z","steps":["trace[1814892135] 'agreement among raft nodes before linearized reading'  (duration: 193.700916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:00.942519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.799596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-17T08:39:00.944656Z","caller":"traceutil/trace.go:171","msg":"trace[1494037761] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:389; }","duration":"195.932813ms","start":"2024-09-17T08:39:00.748716Z","end":"2024-09-17T08:39:00.944649Z","steps":["trace[1494037761] 'agreement among raft nodes before linearized reading'  (duration: 193.78917ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.236883Z","caller":"traceutil/trace.go:171","msg":"trace[1393862041] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"189.03103ms","start":"2024-09-17T08:39:01.047836Z","end":"2024-09-17T08:39:01.236868Z","steps":["trace[1393862041] 'process raft request'  (duration: 84.371141ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246393Z","caller":"traceutil/trace.go:171","msg":"trace[350871136] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"192.090658ms","start":"2024-09-17T08:39:01.054286Z","end":"2024-09-17T08:39:01.246377Z","steps":["trace[350871136] 'process raft request'  (duration: 192.056665ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246556Z","caller":"traceutil/trace.go:171","msg":"trace[288716589] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"192.561769ms","start":"2024-09-17T08:39:01.053978Z","end":"2024-09-17T08:39:01.246540Z","steps":["trace[288716589] 'process raft request'  (duration: 192.289701ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246589Z","caller":"traceutil/trace.go:171","msg":"trace[842047613] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"194.309372ms","start":"2024-09-17T08:39:01.052273Z","end":"2024-09-17T08:39:01.246583Z","steps":["trace[842047613] 'process raft request'  (duration: 193.860025ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246756Z","caller":"traceutil/trace.go:171","msg":"trace[874038599] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"192.611349ms","start":"2024-09-17T08:39:01.054136Z","end":"2024-09-17T08:39:01.246747Z","steps":["trace[874038599] 'process raft request'  (duration: 192.166716ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246789Z","caller":"traceutil/trace.go:171","msg":"trace[832402900] linearizableReadLoop","detail":"{readStateIndex:412; appliedIndex:412; }","duration":"107.196849ms","start":"2024-09-17T08:39:01.139584Z","end":"2024-09-17T08:39:01.246781Z","steps":["trace[832402900] 'read index received'  (duration: 107.193495ms)","trace[832402900] 'applied index is now lower than readState.Index'  (duration: 2.936µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T08:39:01.246842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.242882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.247903Z","caller":"traceutil/trace.go:171","msg":"trace[1595279853] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"108.044342ms","start":"2024-09-17T08:39:01.139580Z","end":"2024-09-17T08:39:01.247624Z","steps":["trace[1595279853] 'agreement among raft nodes before linearized reading'  (duration: 107.221566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.249317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.530022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.250846Z","caller":"traceutil/trace.go:171","msg":"trace[1335273238] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:407; }","duration":"111.069492ms","start":"2024-09-17T08:39:01.139765Z","end":"2024-09-17T08:39:01.250834Z","steps":["trace[1335273238] 'agreement among raft nodes before linearized reading'  (duration: 109.456626ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.249635Z","caller":"traceutil/trace.go:171","msg":"trace[134367931] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"109.798885ms","start":"2024-09-17T08:39:01.139825Z","end":"2024-09-17T08:39:01.249624Z","steps":["trace[134367931] 'process raft request'  (duration: 109.176303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.250797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.892038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.251932Z","caller":"traceutil/trace.go:171","msg":"trace[1048075780] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:407; }","duration":"112.027319ms","start":"2024-09-17T08:39:01.139891Z","end":"2024-09-17T08:39:01.251919Z","steps":["trace[1048075780] 'agreement among raft nodes before linearized reading'  (duration: 110.877975ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:40:35.768719Z","caller":"traceutil/trace.go:171","msg":"trace[33144781] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"100.543757ms","start":"2024-09-17T08:40:35.668147Z","end":"2024-09-17T08:40:35.768691Z","steps":["trace[33144781] 'process raft request'  (duration: 100.303667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:40:35.958931Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.840736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-bmr95\" ","response":"range_response_count:1 size:4865"}
	{"level":"info","ts":"2024-09-17T08:40:35.958981Z","caller":"traceutil/trace.go:171","msg":"trace[13582332] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-bmr95; range_end:; response_count:1; response_revision:1201; }","duration":"105.907905ms","start":"2024-09-17T08:40:35.853062Z","end":"2024-09-17T08:40:35.958970Z","steps":["trace[13582332] 'range keys from in-memory index tree'  (duration: 105.71294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:48:48.277449Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1537}
	{"level":"info","ts":"2024-09-17T08:48:48.301907Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1537,"took":"23.976999ms","hash":2118524458,"current-db-size-bytes":6434816,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-17T08:48:48.301956Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2118524458,"revision":1537,"compact-revision":-1}
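
The "apply request took too long" warnings above fire whenever a read exceeds etcd's 100ms expected-duration. A sketch of timing one range request against that threshold with the etcd v3 client; it assumes a plaintext endpoint on 127.0.0.1:2379, whereas the etcd in this cluster requires client TLS, so this is illustrative only:

	package main

	import (
		"context"
		"fmt"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"127.0.0.1:2379"},
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		start := time.Now()
		_, err = cli.Get(context.Background(), "/registry/health")
		took := time.Since(start)
		if err != nil {
			panic(err)
		}
		if took > 100*time.Millisecond {
			fmt.Println("apply request took too long:", took)
		} else {
			fmt.Println("range completed in", took)
		}
	}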
	
	
	==> gcp-auth [843e30f0a0cf860efc230a2a87deca3cc75d4f6408e31a84a0dd5b01df4dc08d] <==
	2024/09/17 08:41:32 Ready to write response ...
	2024/09/17 08:41:32 Ready to marshal response ...
	2024/09/17 08:41:32 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:45 Ready to marshal response ...
	2024/09/17 08:49:45 Ready to write response ...
	2024/09/17 08:49:46 Ready to marshal response ...
	2024/09/17 08:49:46 Ready to write response ...
	2024/09/17 08:49:51 Ready to marshal response ...
	2024/09/17 08:49:51 Ready to write response ...
	2024/09/17 08:49:52 Ready to marshal response ...
	2024/09/17 08:49:52 Ready to write response ...
	2024/09/17 08:49:52 Ready to marshal response ...
	2024/09/17 08:49:52 Ready to write response ...
	2024/09/17 08:49:53 Ready to marshal response ...
	2024/09/17 08:49:53 Ready to write response ...
	2024/09/17 08:49:54 Ready to marshal response ...
	2024/09/17 08:49:54 Ready to write response ...
	2024/09/17 08:50:25 Ready to marshal response ...
	2024/09/17 08:50:25 Ready to write response ...
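
Each request to the gcp-auth webhook produces the marshal-then-write pair above. A minimal sketch of a handler with that shape; the route, port, and payload are placeholders, not the addon's actual code:

	package main

	import (
		"encoding/json"
		"log"
		"net/http"
	)

	func handle(w http.ResponseWriter, r *http.Request) {
		log.Println("Ready to marshal response ...")
		body, err := json.Marshal(map[string]string{"status": "ok"}) // placeholder payload
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		log.Println("Ready to write response ...")
		w.Header().Set("Content-Type", "application/json")
		w.Write(body)
	}

	func main() {
		http.HandleFunc("/mutate", handle) // assumed route
		log.Fatal(http.ListenAndServe(":8080", nil))
	}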
	
	
	==> kernel <==
	 08:50:47 up  2:33,  0 users,  load average: 0.21, 0.29, 0.69
	Linux addons-093168 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] <==
	I0917 08:48:41.149220       1 main.go:299] handling current node
	I0917 08:48:51.156046       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:48:51.156094       1 main.go:299] handling current node
	I0917 08:49:01.149092       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:49:01.149126       1 main.go:299] handling current node
	I0917 08:49:11.153622       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:49:11.153661       1 main.go:299] handling current node
	I0917 08:49:21.155739       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:49:21.155791       1 main.go:299] handling current node
	I0917 08:49:31.157184       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:49:31.157214       1 main.go:299] handling current node
	I0917 08:49:41.149667       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:49:41.149708       1 main.go:299] handling current node
	I0917 08:49:51.149483       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:49:51.149546       1 main.go:299] handling current node
	I0917 08:50:01.149360       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:50:01.149401       1 main.go:299] handling current node
	I0917 08:50:11.150363       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:50:11.150423       1 main.go:299] handling current node
	I0917 08:50:21.154095       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:50:21.154128       1 main.go:299] handling current node
	I0917 08:50:31.149467       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:50:31.149518       1 main.go:299] handling current node
	I0917 08:50:41.149121       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:50:41.149169       1 main.go:299] handling current node
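
kindnet logs one "Handling node" pass every ten seconds, i.e. a fixed-interval reconcile loop over the node list. A sketch of that loop shape; the node map is a stand-in for kindnet's real informer-fed data:

	package main

	import (
		"log"
		"time"
	)

	func main() {
		nodeIPs := map[string]struct{}{"192.168.49.2": {}}
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			log.Printf("Handling node with IPs: %v", nodeIPs)
			// The real loop reconciles routes/iptables per node here.
			log.Printf("handling current node")
		}
	}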
	
	
	==> kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] <==
	 > logger="UnhandledError"
	E0917 08:41:22.029288       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.221.184:443: connect: connection refused" logger="UnhandledError"
	W0917 08:41:23.031581       1 handler_proxy.go:99] no RequestInfo found in the context
	W0917 08:41:23.031606       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 08:41:23.031645       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0917 08:41:23.031691       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 08:41:23.032764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 08:41:23.032787       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0917 08:41:27.038506       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W0917 08:41:27.038723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 08:41:27.039088       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 08:41:27.049456       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0917 08:49:36.125694       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.199.141"}
	E0917 08:49:48.897202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40012: use of closed network connection
	E0917 08:49:48.922992       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:41648: read: connection reset by peer
	E0917 08:49:53.964352       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0917 08:49:54.758375       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 08:49:54.934461       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.248.144"}
	I0917 08:50:05.538716       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
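
The aggregation errors above come from the apiserver probing the metrics-server service and getting connection refused or a 503. A sketch of the same availability check as a plain HTTPS GET with a short timeout; the service IP is taken from the log, and InsecureSkipVerify stands in for the apiserver's real proxy TLS handling:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("probe failed:", err) // e.g. connect: connection refused
			return
		}
		defer resp.Body.Close()
		fmt.Println("probe status:", resp.Status) // a 503 marks the APIService unavailable
	}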
	
	
	==> kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] <==
	I0917 08:40:52.008906       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0917 08:40:52.033670       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0917 08:40:55.267760       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093168"
	E0917 08:40:56.449598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 08:40:56.860390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 08:41:22.026183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="6.124054ms"
	I0917 08:41:22.026300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="69.757µs"
	E0917 08:41:26.455320       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0917 08:41:26.867286       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0917 08:46:01.712551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093168"
	I0917 08:49:36.163780       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="22.896154ms"
	I0917 08:49:36.169153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="5.323432ms"
	I0917 08:49:36.169244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="51.904µs"
	I0917 08:49:36.171842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="87.069µs"
	I0917 08:49:40.852952       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="55.154µs"
	I0917 08:49:40.872383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="7.876527ms"
	I0917 08:49:40.872521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="78.76µs"
	I0917 08:49:41.053486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="11.529µs"
	I0917 08:49:46.654231       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="9.453µs"
	I0917 08:49:51.113100       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0917 08:49:54.449441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="22.15µs"
	I0917 08:49:55.410875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093168"
	I0917 08:49:56.760619       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0917 08:50:25.617335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093168"
	I0917 08:50:46.196796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.288µs"
	
	
	==> kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] <==
	I0917 08:39:00.642627       1 server_linux.go:66] "Using iptables proxy"
	I0917 08:39:01.648049       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 08:39:01.648220       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:39:02.034353       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 08:39:02.034507       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:39:02.043649       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:39:02.044366       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:39:02.044467       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:39:02.047306       1 config.go:199] "Starting service config controller"
	I0917 08:39:02.047353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:39:02.047414       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:39:02.047425       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:39:02.048125       1 config.go:328] "Starting node config controller"
	I0917 08:39:02.048199       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:39:02.148044       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 08:39:02.148173       1 shared_informer.go:320] Caches are synced for service config
	I0917 08:39:02.150486       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] <==
	W0917 08:38:49.536513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0917 08:38:49.536752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0917 08:38:49.536844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 08:38:49.536913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 08:38:49.537008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:49.536852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.537056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.536771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:49.537088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 08:38:49.537126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 08:38:49.537153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.537194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0917 08:38:49.537194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 08:38:49.537213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.537222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:50.443859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 08:38:50.443910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:50.468561       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:50.468614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 08:38:50.759161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 08:50:25 addons-093168 kubelet[1648]: I0917 08:50:25.955865    1648 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56954405-8bfa-4174-b295-2b470eebf6ba\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e1762dc9-74d1-11ef-8260-aea578a76d0f\") pod \"task-pv-pod-restore\" (UID: \"973c077a-45c1-4c85-bd62-419d8901a499\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/0b02f156f596d2710ee23dc53db0c7798e239ad0bbd3884358745c875a7b63c2/globalmount\"" pod="default/task-pv-pod-restore"
	Sep 17 08:50:30 addons-093168 kubelet[1648]: E0917 08:50:30.569567    1648 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 08:50:30 addons-093168 kubelet[1648]: E0917 08:50:30.569662    1648 kuberuntime_image.go:55] "Failed to pull image" err="determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 08:50:30 addons-093168 kubelet[1648]: E0917 08:50:30.569932    1648 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dd297,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(310f797d-f8e1-4d73-abe1-05f4dc832ecc): ErrImagePull: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 08:50:30 addons-093168 kubelet[1648]: E0917 08:50:30.571573    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="310f797d-f8e1-4d73-abe1-05f4dc832ecc"
	Sep 17 08:50:31 addons-093168 kubelet[1648]: E0917 08:50:31.027166    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="310f797d-f8e1-4d73-abe1-05f4dc832ecc"
	Sep 17 08:50:32 addons-093168 kubelet[1648]: E0917 08:50:32.192912    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563032192671136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:50:32 addons-093168 kubelet[1648]: E0917 08:50:32.192948    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563032192671136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:50:36 addons-093168 kubelet[1648]: E0917 08:50:36.936034    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:50:42 addons-093168 kubelet[1648]: E0917 08:50:42.195450    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563042195202777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:50:42 addons-093168 kubelet[1648]: E0917 08:50:42.195483    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563042195202777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:50:46 addons-093168 kubelet[1648]: I0917 08:50:46.497531    1648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnfhr\" (UniqueName: \"kubernetes.io/projected/efc2db30-2af8-4cf7-a316-5dac4df4a136-kube-api-access-lnfhr\") pod \"efc2db30-2af8-4cf7-a316-5dac4df4a136\" (UID: \"efc2db30-2af8-4cf7-a316-5dac4df4a136\") "
	Sep 17 08:50:46 addons-093168 kubelet[1648]: I0917 08:50:46.499470    1648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc2db30-2af8-4cf7-a316-5dac4df4a136-kube-api-access-lnfhr" (OuterVolumeSpecName: "kube-api-access-lnfhr") pod "efc2db30-2af8-4cf7-a316-5dac4df4a136" (UID: "efc2db30-2af8-4cf7-a316-5dac4df4a136"). InnerVolumeSpecName "kube-api-access-lnfhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:50:46 addons-093168 kubelet[1648]: I0917 08:50:46.598125    1648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hq97\" (UniqueName: \"kubernetes.io/projected/8bc41646-54c5-4d13-8d5f-bebcdc6f15ce-kube-api-access-6hq97\") pod \"8bc41646-54c5-4d13-8d5f-bebcdc6f15ce\" (UID: \"8bc41646-54c5-4d13-8d5f-bebcdc6f15ce\") "
	Sep 17 08:50:46 addons-093168 kubelet[1648]: I0917 08:50:46.598290    1648 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lnfhr\" (UniqueName: \"kubernetes.io/projected/efc2db30-2af8-4cf7-a316-5dac4df4a136-kube-api-access-lnfhr\") on node \"addons-093168\" DevicePath \"\""
	Sep 17 08:50:46 addons-093168 kubelet[1648]: I0917 08:50:46.599849    1648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc41646-54c5-4d13-8d5f-bebcdc6f15ce-kube-api-access-6hq97" (OuterVolumeSpecName: "kube-api-access-6hq97") pod "8bc41646-54c5-4d13-8d5f-bebcdc6f15ce" (UID: "8bc41646-54c5-4d13-8d5f-bebcdc6f15ce"). InnerVolumeSpecName "kube-api-access-6hq97". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:50:46 addons-093168 kubelet[1648]: I0917 08:50:46.698624    1648 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6hq97\" (UniqueName: \"kubernetes.io/projected/8bc41646-54c5-4d13-8d5f-bebcdc6f15ce-kube-api-access-6hq97\") on node \"addons-093168\" DevicePath \"\""
	Sep 17 08:50:47 addons-093168 kubelet[1648]: I0917 08:50:47.065409    1648 scope.go:117] "RemoveContainer" containerID="91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb"
	Sep 17 08:50:47 addons-093168 kubelet[1648]: I0917 08:50:47.083062    1648 scope.go:117] "RemoveContainer" containerID="91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb"
	Sep 17 08:50:47 addons-093168 kubelet[1648]: E0917 08:50:47.083499    1648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb\": container with ID starting with 91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb not found: ID does not exist" containerID="91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb"
	Sep 17 08:50:47 addons-093168 kubelet[1648]: I0917 08:50:47.083545    1648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb"} err="failed to get container status \"91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb\": rpc error: code = NotFound desc = could not find container \"91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb\": container with ID starting with 91bf8d9cb37639fd27750047c9ff0dc31aa15547e5e09fe2335f447264927cfb not found: ID does not exist"
	Sep 17 08:50:47 addons-093168 kubelet[1648]: I0917 08:50:47.083572    1648 scope.go:117] "RemoveContainer" containerID="79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835"
	Sep 17 08:50:47 addons-093168 kubelet[1648]: I0917 08:50:47.100154    1648 scope.go:117] "RemoveContainer" containerID="79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835"
	Sep 17 08:50:47 addons-093168 kubelet[1648]: E0917 08:50:47.100564    1648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835\": container with ID starting with 79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835 not found: ID does not exist" containerID="79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835"
	Sep 17 08:50:47 addons-093168 kubelet[1648]: I0917 08:50:47.100613    1648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835"} err="failed to get container status \"79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835\": rpc error: code = NotFound desc = could not find container \"79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835\": container with ID starting with 79312eb33364d01e9597a590c82e439187b9df51dcd079b2479913125df7f835 not found: ID does not exist"
	
	
	==> storage-provisioner [6d7dbaef7a5cdfbfc36d8383927eea1f42c07e4bc01e6aa61dd711665433a6d2] <==
	I0917 08:39:42.145412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:39:42.155383       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:39:42.155443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 08:39:42.163576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 08:39:42.163731       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e63dab40-9e98-4f4f-adef-1b218f507e90", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b became leader
	I0917 08:39:42.163849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b!
	I0917 08:39:42.264554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b!
	

                                                
                                                
-- /stdout --
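The kubelet section above shows two independent registry-side failures: docker.io refusing anonymous pulls of nginx:alpine with toomanyrequests, and (per the pod events later in this report) gcr.io rejecting the busybox pull with an auth error. A hedged way to confirm the failure is registry-side rather than a CRI fault is to repeat the pull from inside the node; this assumes crictl is on the node's PATH, as it normally is in minikube's crio images:

	# reproduce the pull inside the minikube node, bypassing the kubelet
	minikube -p addons-093168 ssh -- sudo crictl pull docker.io/nginx:alpine
	# a toomanyrequests error here confirms the Docker Hub rate limit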
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093168 -n addons-093168
helpers_test.go:261: (dbg) Run:  kubectl --context addons-093168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx registry-test task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-093168 describe pod busybox nginx registry-test task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-093168 describe pod busybox nginx registry-test task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp: exit status 1 (117.079403ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:41:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gdp6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gdp6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-093168
	  Normal   Pulling    7m44s (x4 over 9m15s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m15s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m32s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:49:54 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dd297 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dd297:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age               From               Message
	  ----     ------     ----              ----               -------
	  Normal   Scheduled  54s               default-scheduler  Successfully assigned default/nginx to addons-093168
	  Warning  Failed     18s               kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     18s               kubelet            Error: ErrImagePull
	  Normal   BackOff    17s               kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     17s               kubelet            Error: ImagePullBackOff
	  Normal   Pulling    5s (x2 over 53s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:                      registry-test
	Namespace:                 default
	Priority:                  0
	Service Account:           default
	Node:                      addons-093168/192.168.49.2
	Start Time:                Tue, 17 Sep 2024 08:49:45 +0000
	Labels:                    run=registry-test
	Annotations:               <none>
	Status:                    Terminating (lasts <invalid>)
	Termination Grace Period:  30s
	IP:                        10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qfqt2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qfqt2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  63s                default-scheduler  Successfully assigned default/registry-test to addons-093168
	  Warning  Failed     63s                kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     63s                kubelet            Error: ErrImagePull
	  Normal   BackOff    62s                kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox"
	  Warning  Failed     62s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    47s (x2 over 63s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:50:25 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gzwmm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-gzwmm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  23s   default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-093168
	  Normal  Pulling    22s   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:49:57 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9njfw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-9njfw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  51s   default-scheduler  Successfully assigned default/test-local-path to addons-093168
	  Normal  Pulling    49s   kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4qdns" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pzmkp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-093168 describe pod busybox nginx registry-test task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.03s)
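Note that the probe pod itself never ran: its gcr.io/k8s-minikube/busybox image failed to pull, so the wget timeout says nothing about the registry Service. A hedged manual re-check, assuming docker.io pulls recover and substituting busybox:stable (the tag the test-local-path pod uses) for the unpullable image; registry-probe is an arbitrary pod name:

	# re-run the same reachability check with a pullable busybox image
	kubectl --context addons-093168 run registry-probe --rm -it --restart=Never \
	  --image=busybox:stable -- \
	  wget --spider -S http://registry.kube-system.svc.cluster.local
	# expect an HTTP/1.1 200 response if the registry addon is healthy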

                                                
                                    
x
+
TestAddons/parallel/Ingress (482.96s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-093168 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-093168 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-093168 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [310f797d-f8e1-4d73-abe1-05f4dc832ecc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093168 -n addons-093168
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2024-09-17 08:57:55.236354606 +0000 UTC m=+1192.518500518
addons_test.go:252: (dbg) Run:  kubectl --context addons-093168 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-093168 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-093168/192.168.49.2
Start Time:       Tue, 17 Sep 2024 08:49:54 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
  IP:  10.244.0.29
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dd297 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dd297:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  8m1s                  default-scheduler  Successfully assigned default/nginx to addons-093168
  Warning  Failed     7m25s                 kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m35s (x4 over 8m)    kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     100s (x4 over 7m25s)  kubelet            Error: ErrImagePull
  Warning  Failed     100s (x3 over 5m22s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    65s (x7 over 7m24s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     65s (x7 over 7m24s)   kubelet            Error: ImagePullBackOff
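Every Failed event above is a docker.io toomanyrequests response, so the ingress controller never gets a backend pod to route to. One possible mitigation, sketched here with placeholder credentials (hub-creds, <user>, and <access-token> are illustrative, not from this run), is to authenticate the pulls so they count against Docker Hub's higher authenticated rate limit:

	# store Docker Hub credentials and attach them to the default ServiceAccount
	kubectl --context addons-093168 create secret docker-registry hub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-093168 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"hub-creds"}]}'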
addons_test.go:252: (dbg) Run:  kubectl --context addons-093168 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-093168 logs nginx -n default: exit status 1 (62.165456ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:252: kubectl --context addons-093168 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
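An alternative that avoids docker.io from inside the cluster entirely is to side-load the image from the CI host, assuming the host's own pull succeeds or the image is already in its local cache:

	# pull once on the host, then copy the image into the minikube node
	docker pull docker.io/nginx:alpine
	minikube -p addons-093168 image load docker.io/nginx:alpine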
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-093168
helpers_test.go:235: (dbg) docker inspect addons-093168:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926",
	        "Created": "2024-09-17T08:38:37.745470595Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 398166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T08:38:37.853843611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/hosts",
	        "LogPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926-json.log",
	        "Name": "/addons-093168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-093168:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-093168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe-init/diff:/var/lib/docker/overlay2/22ea169b69b771958d5aa21d4886a5f67242c32d10a387f6aa1fe4f8feab18cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-093168",
	                "Source": "/var/lib/docker/volumes/addons-093168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-093168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-093168",
	                "name.minikube.sigs.k8s.io": "addons-093168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a27331437cb7fe2f3918d4f21c6d0976e37e8d2fb43412d6ed2152b1f3b4fa1d",
	            "SandboxKey": "/var/run/docker/netns/a27331437cb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-093168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b1ff23e6ca5d5222d1d8818100c713ebb16a506c62eb4243a00007b105030e92",
	                    "EndpointID": "6cf14f071fae4cd24a1dac2c9e7c6dc188dcb38a38a4daaba6556d5caaa91067",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-093168",
	                        "f0cc99258b2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-093168 -n addons-093168
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-093168 logs -n 25: (1.226572046s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-963544              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | -o=json --download-only              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-223077              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-223077              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-223077              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                   | download-docker-146413 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | download-docker-146413               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-146413            | download-docker-146413 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                   | binary-mirror-713061   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | binary-mirror-713061                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45413               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-713061              | binary-mirror-713061   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| addons  | disable dashboard -p                 | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| start   | -p addons-093168 --wait=true         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | -p addons-093168                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | -p addons-093168                     |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-093168 ip                     | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:53 UTC | 17 Sep 24 08:53 UTC |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:53 UTC | 17 Sep 24 08:53 UTC |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | addons-093168 addons                 | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:55 UTC | 17 Sep 24 08:55 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
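
The header above documents the glog/klog prefix format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) used by every line that follows. A minimal sketch of parsing that prefix with a regular expression, for anyone post-processing these logs (the field names are illustrative):

	// A minimal sketch: split a klog-style line into its prefix fields.
	package main

	import (
		"fmt"
		"regexp"
	)

	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

	func main() {
		line := "I0917 08:38:14.268718  397419 out.go:345] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
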
	I0917 08:38:14.268718  397419 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:14.268997  397419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:14.269006  397419 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:14.269011  397419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:14.269250  397419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 08:38:14.269979  397419 out.go:352] Setting JSON to false
	I0917 08:38:14.270971  397419 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8443,"bootTime":1726553851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:14.271094  397419 start.go:139] virtualization: kvm guest
	I0917 08:38:14.273237  397419 out.go:177] * [addons-093168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 08:38:14.274641  397419 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 08:38:14.274672  397419 notify.go:220] Checking for updates...
	I0917 08:38:14.276997  397419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:14.277996  397419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:14.278999  397419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 08:38:14.280101  397419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 08:38:14.281266  397419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 08:38:14.282616  397419 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:14.304074  397419 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:14.304175  397419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:14.349142  397419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:14.340459492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:14.349250  397419 docker.go:318] overlay module found
	I0917 08:38:14.351082  397419 out.go:177] * Using the docker driver based on user configuration
	I0917 08:38:14.352358  397419 start.go:297] selected driver: docker
	I0917 08:38:14.352372  397419 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:14.352389  397419 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 08:38:14.353172  397419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:14.398286  397419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:14.389900591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:14.398447  397419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:14.398700  397419 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:38:14.400294  397419 out.go:177] * Using Docker driver with root privileges
	I0917 08:38:14.401571  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:14.401650  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:14.401663  397419 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 08:38:14.401757  397419 start.go:340] cluster config:
	{Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:14.402986  397419 out.go:177] * Starting "addons-093168" primary control-plane node in "addons-093168" cluster
	I0917 08:38:14.404072  397419 cache.go:121] Beginning downloading kic base image for docker with crio
	I0917 08:38:14.405262  397419 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0917 08:38:14.406317  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:14.406352  397419 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 08:38:14.406353  397419 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0917 08:38:14.406362  397419 cache.go:56] Caching tarball of preloaded images
	I0917 08:38:14.406475  397419 preload.go:172] Found /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 08:38:14.406487  397419 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 08:38:14.406819  397419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json ...
	I0917 08:38:14.406838  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json: {Name:mk614388e178da61bf05196ce91ed40880ae45f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:14.422815  397419 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0917 08:38:14.422934  397419 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0917 08:38:14.422949  397419 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0917 08:38:14.422954  397419 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0917 08:38:14.422960  397419 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0917 08:38:14.422968  397419 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0917 08:38:25.896345  397419 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0917 08:38:25.896393  397419 cache.go:194] Successfully downloaded all kic artifacts
	I0917 08:38:25.896448  397419 start.go:360] acquireMachinesLock for addons-093168: {Name:mkac87ef08cf18f2f3037d42f97e6975bc93fa09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 08:38:25.896575  397419 start.go:364] duration metric: took 100.043µs to acquireMachinesLock for "addons-093168"
	I0917 08:38:25.896610  397419 start.go:93] Provisioning new machine with config: &{Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 08:38:25.896717  397419 start.go:125] createHost starting for "" (driver="docker")
	I0917 08:38:25.898703  397419 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 08:38:25.898987  397419 start.go:159] libmachine.API.Create for "addons-093168" (driver="docker")
	I0917 08:38:25.899037  397419 client.go:168] LocalClient.Create starting
	I0917 08:38:25.899156  397419 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem
	I0917 08:38:26.182492  397419 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem
	I0917 08:38:26.297180  397419 cli_runner.go:164] Run: docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 08:38:26.312692  397419 cli_runner.go:211] docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 08:38:26.312773  397419 network_create.go:284] running [docker network inspect addons-093168] to gather additional debugging logs...
	I0917 08:38:26.312794  397419 cli_runner.go:164] Run: docker network inspect addons-093168
	W0917 08:38:26.328447  397419 cli_runner.go:211] docker network inspect addons-093168 returned with exit code 1
	I0917 08:38:26.328492  397419 network_create.go:287] error running [docker network inspect addons-093168]: docker network inspect addons-093168: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-093168 not found
	I0917 08:38:26.328507  397419 network_create.go:289] output of [docker network inspect addons-093168]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-093168 not found
	
	** /stderr **
	I0917 08:38:26.328630  397419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:26.344660  397419 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b00bc0}
	I0917 08:38:26.344706  397419 network_create.go:124] attempt to create docker network addons-093168 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 08:38:26.344757  397419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-093168 addons-093168
	I0917 08:38:26.403233  397419 network_create.go:108] docker network addons-093168 192.168.49.0/24 created
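
The sequence above — inspect the network, treat the non-zero exit and "network addons-093168 not found" as absence, then create it with an explicit subnet and gateway — is a standard idempotent pattern. A minimal sketch of the same flow via the Docker CLI (`ensureNetwork` is an illustrative helper, not minikube's implementation):

	// ensureNetwork: a minimal sketch of the inspect-then-create flow above.
	// Assumes the Docker CLI is on PATH; not minikube's actual code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensureNetwork(name, subnet, gateway string) error {
		// "docker network inspect" exits non-zero when the network is absent.
		if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
			return nil // already exists
		}
		return exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			name).Run()
	}

	func main() {
		if err := ensureNetwork("addons-093168", "192.168.49.0/24", "192.168.49.1"); err != nil {
			fmt.Println("create failed:", err)
		}
	}
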
	I0917 08:38:26.403277  397419 kic.go:121] calculated static IP "192.168.49.2" for the "addons-093168" container
	I0917 08:38:26.403354  397419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 08:38:26.419565  397419 cli_runner.go:164] Run: docker volume create addons-093168 --label name.minikube.sigs.k8s.io=addons-093168 --label created_by.minikube.sigs.k8s.io=true
	I0917 08:38:26.436382  397419 oci.go:103] Successfully created a docker volume addons-093168
	I0917 08:38:26.436456  397419 cli_runner.go:164] Run: docker run --rm --name addons-093168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --entrypoint /usr/bin/test -v addons-093168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0917 08:38:33.360703  397419 cli_runner.go:217] Completed: docker run --rm --name addons-093168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --entrypoint /usr/bin/test -v addons-093168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (6.924191678s)
	I0917 08:38:33.360734  397419 oci.go:107] Successfully prepared a docker volume addons-093168
	I0917 08:38:33.360748  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:33.360770  397419 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 08:38:33.360820  397419 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-093168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 08:38:37.679996  397419 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-093168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.31913353s)
	I0917 08:38:37.680031  397419 kic.go:203] duration metric: took 4.319258144s to extract preloaded images to volume ...
	W0917 08:38:37.680167  397419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 08:38:37.680264  397419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 08:38:37.730224  397419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-093168 --name addons-093168 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-093168 --network addons-093168 --ip 192.168.49.2 --volume addons-093168:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0917 08:38:38.015246  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Running}}
	I0917 08:38:38.033247  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.053229  397419 cli_runner.go:164] Run: docker exec addons-093168 stat /var/lib/dpkg/alternatives/iptables
	I0917 08:38:38.096763  397419 oci.go:144] the created container "addons-093168" has a running status.
	I0917 08:38:38.096799  397419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa...
	I0917 08:38:38.316707  397419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 08:38:38.338702  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.370614  397419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 08:38:38.370640  397419 kic_runner.go:114] Args: [docker exec --privileged addons-093168 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 08:38:38.443014  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.468083  397419 machine.go:93] provisionDockerMachine start ...
	I0917 08:38:38.468181  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.487785  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.488024  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.488039  397419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 08:38:38.683369  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093168
	
	I0917 08:38:38.683409  397419 ubuntu.go:169] provisioning hostname "addons-093168"
	I0917 08:38:38.683487  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.701314  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.701561  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.701586  397419 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-093168 && echo "addons-093168" | sudo tee /etc/hostname
	I0917 08:38:38.842294  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093168
	
	I0917 08:38:38.842367  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.858454  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.858651  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.858675  397419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-093168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-093168/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-093168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 08:38:38.987912  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 08:38:38.987964  397419 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19648-389277/.minikube CaCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19648-389277/.minikube}
	I0917 08:38:38.988009  397419 ubuntu.go:177] setting up certificates
	I0917 08:38:38.988022  397419 provision.go:84] configureAuth start
	I0917 08:38:38.988088  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.005336  397419 provision.go:143] copyHostCerts
	I0917 08:38:39.005415  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/key.pem (1679 bytes)
	I0917 08:38:39.005548  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/ca.pem (1082 bytes)
	I0917 08:38:39.005641  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/cert.pem (1123 bytes)
	I0917 08:38:39.005712  397419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem org=jenkins.addons-093168 san=[127.0.0.1 192.168.49.2 addons-093168 localhost minikube]
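
For context on what "generating server cert ... san=[...]" involves: a minimal sketch of issuing a server certificate carrying those SANs with Go's crypto/x509, signed by a CA key pair. This is illustrative only — not minikube's provisioning code — and real code would load ca.pem/ca-key.pem from disk rather than generate them:

	// A minimal sketch: sign a server cert with the SANs from the log line
	// above (127.0.0.1, 192.168.49.2, addons-093168, localhost, minikube).
	// Error handling is mostly elided.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-ins for ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}

		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-093168"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SANs from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"addons-093168", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
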
	I0917 08:38:39.090312  397419 provision.go:177] copyRemoteCerts
	I0917 08:38:39.090393  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 08:38:39.090456  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.106972  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.200856  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 08:38:39.222438  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 08:38:39.243612  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 08:38:39.265193  397419 provision.go:87] duration metric: took 277.150434ms to configureAuth
	I0917 08:38:39.265224  397419 ubuntu.go:193] setting minikube options for container-runtime
	I0917 08:38:39.265409  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:39.265521  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.282135  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:39.282384  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:39.282416  397419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 08:38:39.504192  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 08:38:39.504224  397419 machine.go:96] duration metric: took 1.036114607s to provisionDockerMachine
	I0917 08:38:39.504238  397419 client.go:171] duration metric: took 13.605190317s to LocalClient.Create
	I0917 08:38:39.504260  397419 start.go:167] duration metric: took 13.605271001s to libmachine.API.Create "addons-093168"
	I0917 08:38:39.504270  397419 start.go:293] postStartSetup for "addons-093168" (driver="docker")
	I0917 08:38:39.504289  397419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 08:38:39.504344  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 08:38:39.504394  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.522028  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.616778  397419 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 08:38:39.619852  397419 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 08:38:39.619881  397419 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 08:38:39.619889  397419 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 08:38:39.619897  397419 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0917 08:38:39.619908  397419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-389277/.minikube/addons for local assets ...
	I0917 08:38:39.619990  397419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-389277/.minikube/files for local assets ...
	I0917 08:38:39.620018  397419 start.go:296] duration metric: took 115.734968ms for postStartSetup
	I0917 08:38:39.620325  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.637039  397419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json ...
	I0917 08:38:39.637313  397419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 08:38:39.637369  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.653547  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.748768  397419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 08:38:39.752898  397419 start.go:128] duration metric: took 13.856163014s to createHost
	I0917 08:38:39.752925  397419 start.go:83] releasing machines lock for "addons-093168", held for 13.856335009s
	I0917 08:38:39.752987  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.769324  397419 ssh_runner.go:195] Run: cat /version.json
	I0917 08:38:39.769390  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.769443  397419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 08:38:39.769521  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.786951  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.787867  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.941853  397419 ssh_runner.go:195] Run: systemctl --version
	I0917 08:38:39.946158  397419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 08:38:40.084473  397419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 08:38:40.088727  397419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:40.106449  397419 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 08:38:40.106528  397419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:40.132230  397419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 08:38:40.132261  397419 start.go:495] detecting cgroup driver to use...
	I0917 08:38:40.132294  397419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 08:38:40.132351  397419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 08:38:40.146387  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 08:38:40.156232  397419 docker.go:217] disabling cri-docker service (if available) ...
	I0917 08:38:40.156282  397419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 08:38:40.168347  397419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 08:38:40.181162  397419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 08:38:40.257135  397419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 08:38:40.333605  397419 docker.go:233] disabling docker service ...
	I0917 08:38:40.333673  397419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 08:38:40.351601  397419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 08:38:40.362162  397419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 08:38:40.440587  397419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 08:38:40.525972  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 08:38:40.536529  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 08:38:40.551093  397419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 08:38:40.551153  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.559832  397419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 08:38:40.559898  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.568567  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.577380  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.585958  397419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 08:38:40.594312  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.603119  397419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.617231  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
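
Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, shown as an approximate fragment — exact section placement depends on the stock file shipped in the base image:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
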
	I0917 08:38:40.626110  397419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 08:38:40.634005  397419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 08:38:40.641779  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:40.712061  397419 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 08:38:40.806565  397419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 08:38:40.806642  397419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 08:38:40.809970  397419 start.go:563] Will wait 60s for crictl version
	I0917 08:38:40.810032  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:38:40.812917  397419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 08:38:40.845887  397419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
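
The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a simple poll-until-deadline on the CRI socket. A minimal sketch of that pattern (illustrative, not minikube's code):

	// A minimal sketch: poll for a unix socket path until a deadline.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
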
	I0917 08:38:40.845982  397419 ssh_runner.go:195] Run: crio --version
	I0917 08:38:40.880638  397419 ssh_runner.go:195] Run: crio --version
	I0917 08:38:40.915800  397419 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0917 08:38:40.917229  397419 cli_runner.go:164] Run: docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:40.933605  397419 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 08:38:40.937163  397419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 08:38:40.947226  397419 kubeadm.go:883] updating cluster {Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 08:38:40.947379  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:40.947455  397419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 08:38:41.008460  397419 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 08:38:41.008482  397419 crio.go:433] Images already preloaded, skipping extraction
	I0917 08:38:41.008524  397419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 08:38:41.040345  397419 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 08:38:41.040370  397419 cache_images.go:84] Images are preloaded, skipping loading
	I0917 08:38:41.040378  397419 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0917 08:38:41.040480  397419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-093168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
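The [Service] fragment above clears the packaged command with an empty `ExecStart=` line before substituting minikube's own kubelet invocation; it is written as a systemd drop-in (the 10-kubeadm.conf scp'd a few entries below). To view the merged unit on the node, a sketch:

	minikube -p addons-093168 ssh "systemctl cat kubelet"   # packaged unit plus the 10-kubeadm.conf drop-in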
	I0917 08:38:41.040565  397419 ssh_runner.go:195] Run: crio config
	I0917 08:38:41.080761  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:41.080783  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:41.080795  397419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 08:38:41.080819  397419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-093168 NodeName:addons-093168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 08:38:41.080967  397419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-093168"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
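The rendered file above stacks four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A file like this can be sanity-checked before handing it to `kubeadm init --config`; a sketch, assuming a kubeadm new enough to carry the `config validate` subcommand (v1.26+):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml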
	
	I0917 08:38:41.081023  397419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 08:38:41.089456  397419 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 08:38:41.089531  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 08:38:41.097438  397419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 08:38:41.113372  397419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 08:38:41.129326  397419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0917 08:38:41.144885  397419 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 08:38:41.147998  397419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 08:38:41.157624  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:41.237475  397419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:41.249661  397419 certs.go:68] Setting up /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168 for IP: 192.168.49.2
	I0917 08:38:41.249683  397419 certs.go:194] generating shared ca certs ...
	I0917 08:38:41.249699  397419 certs.go:226] acquiring lock for ca certs: {Name:mk8da29d5216ae8373400245c621790543881904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.249825  397419 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key
	I0917 08:38:41.614404  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt ...
	I0917 08:38:41.614440  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt: {Name:mkd45d6a60b00dd159e65c0f1b6c2e5a8afabc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.614666  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key ...
	I0917 08:38:41.614685  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key: {Name:mk5291de481583f940222c6612a96e62ccd87eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.614788  397419 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key
	I0917 08:38:41.754351  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt ...
	I0917 08:38:41.754383  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt: {Name:mk27ce36d6db90e160bdb0276068ed953effdbf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.754586  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key ...
	I0917 08:38:41.754606  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key: {Name:mk3afa86519521f4fca302906407d013abfb0d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.754709  397419 certs.go:256] generating profile certs ...
	I0917 08:38:41.754798  397419 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key
	I0917 08:38:41.754829  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt with IP's: []
	I0917 08:38:42.064154  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt ...
	I0917 08:38:42.064185  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: {Name:mk5cb5afe904908b0cba1bf17d824eee5c984153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.064362  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key ...
	I0917 08:38:42.064377  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key: {Name:mkf2e14b11acd2448049e231dd4ead7716664bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.064476  397419 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d
	I0917 08:38:42.064507  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 08:38:42.261028  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d ...
	I0917 08:38:42.261067  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d: {Name:mk077ce39ea3bb757e6d6ad979b544d7da0b437c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.261244  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d ...
	I0917 08:38:42.261257  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d: {Name:mk33433d67eea38775352092fed9c6a72038761a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.261329  397419 certs.go:381] copying /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d -> /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt
	I0917 08:38:42.261432  397419 certs.go:385] copying /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d -> /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key
	I0917 08:38:42.261485  397419 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key
	I0917 08:38:42.261504  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt with IP's: []
	I0917 08:38:42.508375  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt ...
	I0917 08:38:42.508413  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt: {Name:mk89431354833730cad316e358f6ad32f98671ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.508622  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key ...
	I0917 08:38:42.508638  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key: {Name:mk49266541348c002ddfe954fcac3e31b23d5e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.508851  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 08:38:42.508900  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem (1082 bytes)
	I0917 08:38:42.508938  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem (1123 bytes)
	I0917 08:38:42.508966  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem (1679 bytes)
	I0917 08:38:42.509614  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 08:38:42.532076  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 08:38:42.553868  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 08:38:42.575679  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 08:38:42.597095  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 08:38:42.618358  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 08:38:42.639563  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 08:38:42.660637  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 08:38:42.681627  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 08:38:42.702968  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 08:38:42.718889  397419 ssh_runner.go:195] Run: openssl version
	I0917 08:38:42.724037  397419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 08:38:42.732397  397419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.735486  397419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.735536  397419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.741586  397419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
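`openssl x509 -hash -noout` above prints the subject-name hash OpenSSL uses to index CAs under /etc/ssl/certs; the b5213941.0 symlink created next is that hash plus a .0 suffix. The two steps, reproduced as a sketch:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"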
	I0917 08:38:42.749881  397419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 08:38:42.752874  397419 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 08:38:42.752930  397419 kubeadm.go:392] StartCluster: {Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:42.753025  397419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 08:38:42.753085  397419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 08:38:42.786903  397419 cri.go:89] found id: ""
	I0917 08:38:42.786985  397419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 08:38:42.796179  397419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 08:38:42.804749  397419 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 08:38:42.804799  397419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 08:38:42.812984  397419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 08:38:42.813000  397419 kubeadm.go:157] found existing configuration files:
	
	I0917 08:38:42.813037  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 08:38:42.820866  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 08:38:42.820930  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 08:38:42.828240  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 08:38:42.835643  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 08:38:42.835737  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 08:38:42.843259  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 08:38:42.851080  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 08:38:42.851131  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 08:38:42.858437  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 08:38:42.866098  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 08:38:42.866156  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 08:38:42.873252  397419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 08:38:42.908386  397419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 08:38:42.908464  397419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 08:38:42.923732  397419 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 08:38:42.923800  397419 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0917 08:38:42.923834  397419 kubeadm.go:310] OS: Linux
	I0917 08:38:42.923879  397419 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 08:38:42.923964  397419 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 08:38:42.924025  397419 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 08:38:42.924093  397419 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 08:38:42.924167  397419 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 08:38:42.924236  397419 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 08:38:42.924302  397419 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 08:38:42.924375  397419 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 08:38:42.924442  397419 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 08:38:42.973444  397419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 08:38:42.973610  397419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 08:38:42.973749  397419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 08:38:42.979391  397419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 08:38:42.982351  397419 out.go:235]   - Generating certificates and keys ...
	I0917 08:38:42.982445  397419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 08:38:42.982558  397419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 08:38:43.304222  397419 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 08:38:43.356991  397419 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 08:38:43.472470  397419 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 08:38:43.631625  397419 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 08:38:43.778369  397419 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 08:38:43.778571  397419 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-093168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:44.236292  397419 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 08:38:44.236448  397419 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-093168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:44.386759  397419 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 08:38:44.547662  397419 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 08:38:45.256381  397419 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 08:38:45.256470  397419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 08:38:45.352447  397419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 08:38:45.496534  397419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 08:38:45.783093  397419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 08:38:45.948400  397419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 08:38:46.126268  397419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 08:38:46.126739  397419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 08:38:46.129290  397419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 08:38:46.131498  397419 out.go:235]   - Booting up control plane ...
	I0917 08:38:46.131624  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 08:38:46.131735  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 08:38:46.131825  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 08:38:46.139890  397419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 08:38:46.145973  397419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 08:38:46.146041  397419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 08:38:46.229694  397419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 08:38:46.229838  397419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 08:38:46.732374  397419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.404175ms
	I0917 08:38:46.732502  397419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 08:38:51.232483  397419 kubeadm.go:310] [api-check] The API server is healthy after 4.501470708s
	I0917 08:38:51.243357  397419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 08:38:51.254150  397419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 08:38:51.272346  397419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 08:38:51.272569  397419 kubeadm.go:310] [mark-control-plane] Marking the node addons-093168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 08:38:51.279966  397419 kubeadm.go:310] [bootstrap-token] Using token: k80no8.z164l1wfcaclt3ve
	I0917 08:38:51.281525  397419 out.go:235]   - Configuring RBAC rules ...
	I0917 08:38:51.281680  397419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 08:38:51.284683  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 08:38:51.290003  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 08:38:51.293675  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 08:38:51.296125  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 08:38:51.298653  397419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 08:38:51.638681  397419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 08:38:52.057839  397419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 08:38:52.638211  397419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 08:38:52.639067  397419 kubeadm.go:310] 
	I0917 08:38:52.639151  397419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 08:38:52.639161  397419 kubeadm.go:310] 
	I0917 08:38:52.639256  397419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 08:38:52.639296  397419 kubeadm.go:310] 
	I0917 08:38:52.639346  397419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 08:38:52.639417  397419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 08:38:52.639470  397419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 08:38:52.639478  397419 kubeadm.go:310] 
	I0917 08:38:52.639522  397419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 08:38:52.639529  397419 kubeadm.go:310] 
	I0917 08:38:52.639568  397419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 08:38:52.639593  397419 kubeadm.go:310] 
	I0917 08:38:52.639638  397419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 08:38:52.639707  397419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 08:38:52.639770  397419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 08:38:52.639776  397419 kubeadm.go:310] 
	I0917 08:38:52.639844  397419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 08:38:52.639938  397419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 08:38:52.639972  397419 kubeadm.go:310] 
	I0917 08:38:52.640081  397419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k80no8.z164l1wfcaclt3ve \
	I0917 08:38:52.640203  397419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df9ded58c525a6d55df91cd644932b8a694d03f6beda3e691beb74ea1851cf09 \
	I0917 08:38:52.640238  397419 kubeadm.go:310] 	--control-plane 
	I0917 08:38:52.640248  397419 kubeadm.go:310] 
	I0917 08:38:52.640345  397419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 08:38:52.640356  397419 kubeadm.go:310] 
	I0917 08:38:52.640453  397419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k80no8.z164l1wfcaclt3ve \
	I0917 08:38:52.640571  397419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df9ded58c525a6d55df91cd644932b8a694d03f6beda3e691beb74ea1851cf09 
	I0917 08:38:52.642642  397419 kubeadm.go:310] W0917 08:38:42.905770    1305 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:52.643061  397419 kubeadm.go:310] W0917 08:38:42.906409    1305 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:52.643311  397419 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0917 08:38:52.643438  397419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 08:38:52.643454  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:52.643464  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:52.645324  397419 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 08:38:52.646624  397419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 08:38:52.650315  397419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 08:38:52.650335  397419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 08:38:52.667218  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
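Applying cni.yaml above installs the kindnet manifest selected for the docker driver + crio runtime combination. The rollout can be watched with something like the following sketch (the DaemonSet name kindnet is an assumption about the manifest, not shown in this log):

	kubectl --context addons-093168 -n kube-system rollout status daemonset kindnet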
	I0917 08:38:52.889823  397419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 08:38:52.889885  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:52.889918  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-093168 minikube.k8s.io/updated_at=2024_09_17T08_38_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=addons-093168 minikube.k8s.io/primary=true
	I0917 08:38:52.897123  397419 ops.go:34] apiserver oom_adj: -16
	I0917 08:38:53.039509  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:53.539727  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.039909  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.539969  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.040209  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.540163  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.039997  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.540545  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.039787  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.104143  397419 kubeadm.go:1113] duration metric: took 4.214320429s to wait for elevateKubeSystemPrivileges
	I0917 08:38:57.104195  397419 kubeadm.go:394] duration metric: took 14.351272056s to StartCluster
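The half-second `get sa default` loop above (elevateKubeSystemPrivileges) waits for the controller-manager to create the default ServiceAccount before the minikube-rbac binding can take effect. An equivalent stand-alone wait, as a sketch:

	until kubectl --context addons-093168 get sa default >/dev/null 2>&1; do
	  sleep 0.5   # the default SA appears once controller-manager is serving
	done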
	I0917 08:38:57.104218  397419 settings.go:142] acquiring lock: {Name:mk95cfba95882d4e40150b5e054772c8fe045040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:57.104356  397419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:57.104769  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/kubeconfig: {Name:mk341f12644f68f3679935ee94cc49d156e11570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:57.105015  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 08:38:57.105016  397419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 08:38:57.105108  397419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 08:38:57.105239  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:57.105256  397419 addons.go:69] Setting cloud-spanner=true in profile "addons-093168"
	I0917 08:38:57.105271  397419 addons.go:69] Setting gcp-auth=true in profile "addons-093168"
	I0917 08:38:57.105277  397419 addons.go:234] Setting addon cloud-spanner=true in "addons-093168"
	I0917 08:38:57.105276  397419 addons.go:69] Setting storage-provisioner=true in profile "addons-093168"
	I0917 08:38:57.105278  397419 addons.go:69] Setting volumesnapshots=true in profile "addons-093168"
	I0917 08:38:57.105298  397419 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-093168"
	I0917 08:38:57.105238  397419 addons.go:69] Setting yakd=true in profile "addons-093168"
	I0917 08:38:57.105296  397419 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-093168"
	I0917 08:38:57.105312  397419 addons.go:69] Setting registry=true in profile "addons-093168"
	I0917 08:38:57.105312  397419 addons.go:234] Setting addon volumesnapshots=true in "addons-093168"
	I0917 08:38:57.105317  397419 addons.go:234] Setting addon yakd=true in "addons-093168"
	I0917 08:38:57.105321  397419 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-093168"
	I0917 08:38:57.105323  397419 addons.go:69] Setting helm-tiller=true in profile "addons-093168"
	I0917 08:38:57.105332  397419 addons.go:69] Setting metrics-server=true in profile "addons-093168"
	I0917 08:38:57.105335  397419 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-093168"
	I0917 08:38:57.105259  397419 addons.go:69] Setting volcano=true in profile "addons-093168"
	I0917 08:38:57.105344  397419 addons.go:234] Setting addon metrics-server=true in "addons-093168"
	I0917 08:38:57.105245  397419 addons.go:69] Setting inspektor-gadget=true in profile "addons-093168"
	I0917 08:38:57.105347  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105351  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105324  397419 addons.go:234] Setting addon registry=true in "addons-093168"
	I0917 08:38:57.105357  397419 addons.go:234] Setting addon inspektor-gadget=true in "addons-093168"
	I0917 08:38:57.105362  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105353  397419 addons.go:234] Setting addon volcano=true in "addons-093168"
	I0917 08:38:57.105486  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105291  397419 mustload.go:65] Loading cluster: addons-093168
	I0917 08:38:57.105336  397419 addons.go:234] Setting addon helm-tiller=true in "addons-093168"
	I0917 08:38:57.105608  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105707  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:57.105371  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105931  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105935  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105960  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105377  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.106050  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106193  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105960  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106458  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106627  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105313  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105345  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.107248  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105376  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105250  397419 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-093168"
	I0917 08:38:57.108052  397419 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-093168"
	I0917 08:38:57.108362  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105380  397419 addons.go:69] Setting default-storageclass=true in profile "addons-093168"
	I0917 08:38:57.105302  397419 addons.go:234] Setting addon storage-provisioner=true in "addons-093168"
	I0917 08:38:57.105388  397419 addons.go:69] Setting ingress-dns=true in profile "addons-093168"
	I0917 08:38:57.105386  397419 addons.go:69] Setting ingress=true in profile "addons-093168"
	I0917 08:38:57.108644  397419 addons.go:234] Setting addon ingress=true in "addons-093168"
	I0917 08:38:57.108680  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108700  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108747  397419 addons.go:234] Setting addon ingress-dns=true in "addons-093168"
	I0917 08:38:57.108788  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108821  397419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-093168"
	I0917 08:38:57.112690  397419 out.go:177] * Verifying Kubernetes components...
	I0917 08:38:57.114189  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:57.124402  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.124402  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.124587  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125036  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125084  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125993  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.143502  397419 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 08:38:57.144872  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 08:38:57.144901  397419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 08:38:57.144980  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	W0917 08:38:57.150681  397419 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
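The warning above is expected on this job: the volcano addon declares no crio support, so enabling it fails fast in the addon callback rather than deploying anything. Which addons actually ended up enabled can be listed afterwards; a sketch:

	minikube -p addons-093168 addons list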
	I0917 08:38:57.153691  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.155722  397419 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 08:38:57.159231  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 08:38:57.159256  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 08:38:57.159314  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.172289  397419 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 08:38:57.176642  397419 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:57.176666  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 08:38:57.176733  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.193988  397419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 08:38:57.196004  397419 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 08:38:57.197115  397419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:57.197136  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 08:38:57.197200  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.202125  397419 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 08:38:57.203455  397419 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 08:38:57.203530  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 08:38:57.203679  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.204660  397419 addons.go:234] Setting addon default-storageclass=true in "addons-093168"
	I0917 08:38:57.204707  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.204824  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 08:38:57.205196  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.207284  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:57.207449  397419 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 08:38:57.208612  397419 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 08:38:57.208633  397419 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 08:38:57.208701  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.208883  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:57.210517  397419 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:57.210538  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 08:38:57.210595  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.210853  397419 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 08:38:57.212148  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 08:38:57.212167  397419 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 08:38:57.212221  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.216414  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.219236  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 08:38:57.221033  397419 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-093168"
	I0917 08:38:57.221085  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.221137  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 08:38:57.221157  397419 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 08:38:57.221227  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.221586  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.221963  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 08:38:57.223885  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 08:38:57.225253  397419 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 08:38:57.226499  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 08:38:57.226722  397419 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:57.226737  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 08:38:57.226802  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.229771  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 08:38:57.229842  397419 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 08:38:57.231204  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 08:38:57.231925  397419 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:57.231954  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 08:38:57.232015  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.240168  397419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:57.240188  397419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 08:38:57.240249  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.251934  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.253019  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.256107  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.256961  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 08:38:57.270556  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 08:38:57.272877  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 08:38:57.274130  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 08:38:57.274138  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.274160  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 08:38:57.274232  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.286114  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.286432  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.286552  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.287928  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.292989  397419 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 08:38:57.293246  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.295525  397419 out.go:177]   - Using image docker.io/busybox:stable
	I0917 08:38:57.295767  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.297062  397419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:57.297077  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 08:38:57.297117  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.299372  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.306226  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.314733  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	W0917 08:38:57.337065  397419 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 08:38:57.337105  397419 retry.go:31] will retry after 135.437372ms: ssh: handshake failed: EOF
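The handshake failure above is absorbed by minikube's generic retry helper (retry.go:31): transient SSH errors are retried after a short, slightly randomized delay. A minimal Go sketch of that retry-with-backoff pattern, with a hypothetical dial function standing in for the failing handshake (illustrative only, not minikube's actual helper):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// dial is a hypothetical stand-in for the SSH handshake that failed with EOF.
	func dial(attempt int) error {
		if attempt < 2 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	}

	func main() {
		for attempt := 0; ; attempt++ {
			err := dial(attempt)
			if err == nil {
				fmt.Println("connected")
				return
			}
			// Jitter the delay a little so concurrent retries don't stampede sshd.
			delay := 100*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
	}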
	I0917 08:38:57.346335  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 08:38:57.356789  397419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:57.538116  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 08:38:57.538148  397419 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
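Each "ssh_runner.go:362] scp X --> Y (N bytes)" line above and below is a small manifest copy into the node over the SSH connection established earlier. A sketch of the idea using golang.org/x/crypto/ssh, piping the bytes into "sudo tee" on the remote side (minikube's real runner is more elaborate; the key path and manifest bytes here are placeholders):

	package main

	import (
		"bytes"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// pushFile writes data to remotePath by streaming it into "sudo tee".
	func pushFile(client *ssh.Client, data []byte, remotePath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee copies stdin to the target path; discard its stdout.
		return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	}

	func main() {
		key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33138", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		manifest := []byte("# addon manifest bytes go here\n") // placeholder content
		if err := pushFile(client, manifest, "/etc/kubernetes/addons/example.yaml"); err != nil {
			panic(err)
		}
	}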
	I0917 08:38:57.541546  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:57.642930  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:38:57.642961  397419 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 08:38:57.652875  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 08:38:57.652902  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 08:38:57.744251  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:57.752468  397419 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 08:38:57.752499  397419 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 08:38:57.753674  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 08:38:57.753698  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 08:38:57.833558  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 08:38:57.833662  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 08:38:57.834064  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:57.835232  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:57.842341  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 08:38:57.842375  397419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 08:38:57.849540  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:38:57.853917  397419 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 08:38:57.853947  397419 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 08:38:57.936443  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:57.936758  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 08:38:57.936784  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 08:38:57.938952  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:57.941233  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 08:38:57.941258  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 08:38:58.033712  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:58.034229  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 08:38:58.034295  397419 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 08:38:58.046437  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 08:38:58.046529  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 08:38:58.047136  397419 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 08:38:58.047196  397419 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 08:38:58.133693  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 08:38:58.133782  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 08:38:58.139956  397419 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:58.139985  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 08:38:58.233802  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 08:38:58.233848  397419 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 08:38:58.252638  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:38:58.252687  397419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 08:38:58.254386  397419 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 08:38:58.254464  397419 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 08:38:58.333784  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 08:38:58.333878  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 08:38:58.449224  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 08:38:58.449259  397419 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 08:38:58.449658  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:58.548889  397419 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:38:58.548923  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 08:38:58.633498  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 08:38:58.633532  397419 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 08:38:58.633842  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 08:38:58.633864  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 08:38:58.634541  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:38:58.750791  397419 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 08:38:58.750827  397419 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 08:38:58.936229  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:38:59.233524  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:38:59.233625  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 08:38:59.333560  397419 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 08:38:59.333595  397419 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 08:38:59.653548  397419 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 08:38:59.653582  397419 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 08:38:59.654019  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 08:38:59.654039  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 08:38:59.750974  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:38:59.844245  397419 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.497868768s)
	I0917 08:38:59.844279  397419 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
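For reference, the bash pipeline that just completed (started at 08:38:57.346) edits the coredns ConfigMap in place: it fetches the Corefile, uses sed to insert a "log" directive before "errors" and a "hosts" block before the "forward . /etc/resolv.conf" line, then replaces the ConfigMap. Reconstructed from those sed expressions, the patched Corefile stanza reads (other default directives elided as "..."):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

The hosts block resolves host.minikube.internal to the Docker network gateway, and fallthrough hands every other name on to the remaining plugins.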
	I0917 08:38:59.845507  397419 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.48868759s)
	I0917 08:38:59.846428  397419 node_ready.go:35] waiting up to 6m0s for node "addons-093168" to be "Ready" ...
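node_ready.go:35 starts a six-minute poll on the node object; the recurring node_ready.go:53 "Ready":"False" lines below are iterations of that poll observing the kubelet's Ready condition still false. A client-go sketch of the same kind of wait, assuming the in-VM kubeconfig path from the log (not minikube's code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "addons-093168", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API hiccups as transient; keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println(`node "addons-093168" is Ready`)
	}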
	I0917 08:39:00.150766  397419 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:00.150864  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 08:39:00.241261  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 08:39:00.241385  397419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 08:39:00.434396  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:00.434751  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 08:39:00.434837  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 08:39:00.550189  397419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-093168" context rescaled to 1 replicas
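The kapi.go:214 line records minikube trimming the coredns Deployment from kubeadm's default of two replicas down to one, which is enough for a single-node cluster. A client-go sketch of that rescale via the scale subresource (sketch only):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		ctx := context.Background()
		// Read the current scale, set the desired replica count, write it back.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}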
	I0917 08:39:00.748755  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 08:39:00.748843  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 08:39:00.937410  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:00.937442  397419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 08:39:01.233803  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:01.943544  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:03.261179  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.719582492s)
	I0917 08:39:03.261217  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.516878812s)
	I0917 08:39:03.261224  397419 addons.go:475] Verifying addon ingress=true in "addons-093168"
	I0917 08:39:03.261298  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.427173682s)
	I0917 08:39:03.261369  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.426103213s)
	I0917 08:39:03.261406  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.411830401s)
	I0917 08:39:03.261493  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.325021448s)
	I0917 08:39:03.261534  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.322551933s)
	I0917 08:39:03.261613  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.227807299s)
	I0917 08:39:03.261653  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.811965691s)
	I0917 08:39:03.261677  397419 addons.go:475] Verifying addon registry=true in "addons-093168"
	I0917 08:39:03.261733  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.627156118s)
	I0917 08:39:03.261799  397419 addons.go:475] Verifying addon metrics-server=true in "addons-093168"
	I0917 08:39:03.263039  397419 out.go:177] * Verifying ingress addon...
	I0917 08:39:03.264106  397419 out.go:177] * Verifying registry addon...
	I0917 08:39:03.265798  397419 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 08:39:03.266577  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 08:39:03.338558  397419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:03.338666  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:03.338842  397419 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 08:39:03.338910  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
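The kapi.go:75/86/96 lines are the addon verifiers' wait loop: list the pods matching a label selector, then poll until every match reports phase Running. The long run of "current state: Pending" lines that follows is that loop ticking. A client-go sketch of the pattern (sketch, not minikube's kapi package):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls until every pod matching selector in ns is Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
	}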
	W0917 08:39:03.344429  397419 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
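The storage-provisioner-rancher warning above is a textbook optimistic-concurrency conflict: something else updated the "local-path" StorageClass between minikube's read and write, so the update carried a stale resourceVersion. The standard remedy is to re-read and retry, e.g. with client-go's retry.RetryOnConflict; a sketch (the annotation key is the upstream default-class marker):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the update carries a fresh resourceVersion.
			sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			panic(err)
		}
	}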
	I0917 08:39:03.835535  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:03.868020  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.931736693s)
	W0917 08:39:03.868122  397419 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 08:39:03.868142  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.117119927s)
	I0917 08:39:03.868181  397419 retry.go:31] will retry after 226.647603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
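The failure and retry above are an ordering problem inside a single kubectl apply batch: the VolumeSnapshot CRDs were created in the same invocation as the csi-hostpath-snapclass object, but a CRD only serves its kind once the API server reports it Established, so the custom resource had no REST mapping yet. The retry (and the forced re-apply at 08:39:04) succeeds once the CRDs settle. A client-go sketch that waits for the Established condition before applying CRs (sketch only):

	package main

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := apiextclient.NewForConfigOrDie(cfg)

		name := "volumesnapshotclasses.snapshot.storage.k8s.io"
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		// Safe to apply the VolumeSnapshotClass objects from here on.
	}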
	I0917 08:39:03.868254  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.433802493s)
	I0917 08:39:03.869652  397419 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-093168 service yakd-dashboard -n yakd-dashboard
	
	I0917 08:39:03.934770  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.095668  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:39:04.269371  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.269859  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:04.350132  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:04.360728  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 08:39:04.360808  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:39:04.384783  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:39:04.471408  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.23753895s)
	I0917 08:39:04.471460  397419 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-093168"
	I0917 08:39:04.473008  397419 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 08:39:04.475211  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 08:39:04.535330  397419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:04.535353  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:04.598789  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 08:39:04.615582  397419 addons.go:234] Setting addon gcp-auth=true in "addons-093168"
	I0917 08:39:04.615652  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:39:04.616089  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:39:04.633132  397419 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 08:39:04.633192  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:39:04.651065  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:39:04.769973  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.770233  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.035291  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:05.335175  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:05.336078  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.535256  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:05.769510  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:05.769763  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.979262  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.269556  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:06.269756  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:06.350348  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:06.479032  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.769819  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:06.770387  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:06.979151  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.991964  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.89623192s)
	I0917 08:39:06.992009  397419 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.358851016s)
	I0917 08:39:06.993965  397419 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 08:39:06.995369  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:39:06.996678  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 08:39:06.996699  397419 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 08:39:07.050138  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 08:39:07.050166  397419 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 08:39:07.070212  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:07.070239  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 08:39:07.088585  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:07.269903  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:07.270150  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:07.478566  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:07.742409  397419 addons.go:475] Verifying addon gcp-auth=true in "addons-093168"
	I0917 08:39:07.743971  397419 out.go:177] * Verifying gcp-auth addon...
	I0917 08:39:07.746772  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 08:39:07.749628  397419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 08:39:07.749648  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:07.850058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:07.850470  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:07.980638  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:08.250181  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:08.269219  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:08.269486  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:08.478757  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:08.750637  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:08.769245  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:08.769763  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:08.849706  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:08.978545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:09.250459  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:09.269495  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.269663  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:09.479237  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:09.749689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:09.769399  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:09.769720  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.978863  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:10.250410  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:10.269526  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:10.269619  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.478837  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:10.750940  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:10.769805  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:10.770515  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.979280  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:11.249995  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:11.269719  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:11.270190  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.350491  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:11.478320  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:11.750247  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:11.769390  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:11.769429  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.978986  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:12.250516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:12.269587  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.269693  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.480184  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:12.750404  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:12.769444  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.769591  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.978948  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.250817  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:13.269637  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.270016  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.479104  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.749738  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:13.769523  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.769820  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.850119  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:13.978949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.249884  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:14.269638  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.270062  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.479204  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.749928  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:14.769438  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.769821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.978839  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.250562  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:15.269409  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.269947  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.478860  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.750835  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:15.769345  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.770015  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.850276  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:15.979293  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.250064  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:16.269826  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.270274  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.478595  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.750278  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:16.769441  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.769627  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.978785  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.249585  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:17.269341  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.269848  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.479260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.749952  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:17.769578  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.769936  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.979325  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.249779  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:18.269465  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.269775  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.350075  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:18.478976  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.750758  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:18.769496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.769979  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.979120  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.249745  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:19.269362  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.269944  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.479390  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.749971  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:19.769917  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.770115  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.978384  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.250150  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:20.269613  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.270040  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.479591  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.750572  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:20.769329  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.769808  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.849500  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:20.978496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.250173  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:21.269174  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.269534  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.478769  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.751128  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:21.769357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.769371  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.978913  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.250688  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:22.269349  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.269695  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.478881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.750753  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:22.769486  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.769809  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.849938  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:22.981047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.249913  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:23.269440  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.269919  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.478892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.750856  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:23.769354  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.769865  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.978955  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.249899  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:24.269545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.269991  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.479144  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.750022  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:24.769833  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.770464  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.978298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.250252  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:25.269224  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.269557  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.350289  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:25.479127  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.749639  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:25.769205  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.769585  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.979064  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.250038  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:26.269663  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.270152  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.478995  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.750285  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:26.769308  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.769370  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.978745  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.250676  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:27.269322  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.269652  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.478412  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.750691  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:27.769200  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.769604  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.849933  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:27.979206  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.249964  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:28.269520  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.269919  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.479193  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.749933  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:28.769877  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.770211  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.979141  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.249874  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:29.270072  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.270348  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.478073  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.749899  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:29.769818  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.770374  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.979288  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.250272  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:30.269500  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.269546  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.350342  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:30.479086  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.749787  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:30.769541  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.770013  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.979093  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.250841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:31.269421  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.269882  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.479027  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.749892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:31.769497  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.769834  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.979224  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.250379  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:32.269381  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.269400  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.479357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.750376  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:32.769602  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.769757  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.850423  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:32.979114  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.251004  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:33.269908  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.270175  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.479600  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.749949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:33.769584  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.770008  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.979236  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.250012  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:34.269687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.270180  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.479255  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.750023  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:34.769580  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.770002  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.978387  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.250069  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:35.269828  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.270241  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.349451  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:35.478206  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.749945  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:35.769452  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.769865  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.978859  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.250835  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:36.269592  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.269917  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.478473  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.750428  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:36.769595  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.769685  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.978362  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.250516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:37.269304  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.269681  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.350217  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:37.479043  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.750460  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:37.769597  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.769948  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.978771  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.250668  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:38.269338  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.269667  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.478938  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.750692  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:38.769540  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.770044  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.979152  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.249775  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:39.269195  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:39.269607  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.478771  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.750626  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:39.769136  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:39.769575  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.850038  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:39.979047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.249695  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:40.269441  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:40.269779  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.479084  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.749817  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:40.769332  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:40.769870  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.978708  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.250949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:41.269314  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:41.269830  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.480399  397419 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:41.480422  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
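
The "Found N Pods for label selector ..." and "waiting for pod ..., current state: Pending" lines above come from a poll that lists pods by label and inspects each pod's phase. Below is a minimal client-go sketch of that pattern; the package paths and API calls are real client-go, but the function name, kubeconfig path, and error handling are illustrative, not minikube's actual kapi.go implementation.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// listPodPhases mirrors the "Found N Pods for label selector ..." /
	// "waiting for pod ..., current state: ..." lines: list by label, print phases.
	func listPodPhases(ctx context.Context, client kubernetes.Interface, ns, selector string) error {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
		for _, p := range pods.Items {
			fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
		}
		return nil
	}

	func main() {
		// Illustrative kubeconfig path; not how minikube locates its cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := listPodPhases(context.Background(), client, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
			panic(err)
		}
	}
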
	I0917 08:39:41.760397  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:41.837192  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.837670  397419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:41.837689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:41.849891  397419 node_ready.go:49] node "addons-093168" has status "Ready":"True"
	I0917 08:39:41.849914  397419 node_ready.go:38] duration metric: took 42.0034583s for node "addons-093168" to be "Ready" ...
	I0917 08:39:41.849924  397419 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 08:39:41.858669  397419 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace to be "Ready" ...
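
Once the node reports "Ready":"True", the log switches from node_ready.go to per-pod readiness waits (pod_ready.go). The condition being polled is the PodReady condition in each pod's status. A hedged sketch of that wait loop using client-go's wait helpers follows; it reuses the client from the sketch above, and the 2s interval, 6m timeout, and function name are illustrative rather than minikube's exact values.

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls until the named pod reports the PodReady condition
	// with status True, the way the pod_ready.go lines above wait per pod.
	// wait.PollUntilContextTimeout needs apimachinery v0.27 or newer.
	func waitForPodReady(ctx context.Context, client kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors count as "not ready yet"
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
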
	I0917 08:39:42.038738  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.251747  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:42.352912  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:42.353583  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.479530  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.750176  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:42.770265  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:42.770895  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.979804  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.251776  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:43.351669  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:43.352090  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.364736  397419 pod_ready.go:93] pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.364757  397419 pod_ready.go:82] duration metric: took 1.50606765s for pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.364777  397419 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.369471  397419 pod_ready.go:93] pod "etcd-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.369494  397419 pod_ready.go:82] duration metric: took 4.709608ms for pod "etcd-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.369508  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.373655  397419 pod_ready.go:93] pod "kube-apiserver-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.373672  397419 pod_ready.go:82] duration metric: took 4.156439ms for pod "kube-apiserver-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.373680  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.377527  397419 pod_ready.go:93] pod "kube-controller-manager-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.377561  397419 pod_ready.go:82] duration metric: took 3.873985ms for pod "kube-controller-manager-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.377572  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t77c5" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.450713  397419 pod_ready.go:93] pod "kube-proxy-t77c5" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.450741  397419 pod_ready.go:82] duration metric: took 73.161651ms for pod "kube-proxy-t77c5" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.450755  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.479047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.750717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:43.769660  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:43.769998  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.850947  397419 pod_ready.go:93] pod "kube-scheduler-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.850971  397419 pod_ready.go:82] duration metric: took 400.20789ms for pod "kube-scheduler-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.850982  397419 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.980093  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.250260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:44.269521  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:44.270044  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:44.479161  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.750804  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:44.770420  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:44.770636  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.035777  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.250723  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:45.269748  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:45.270038  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.480689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.750763  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:45.769885  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:45.770680  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.857292  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:45.980017  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.250727  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:46.269788  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:46.270046  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:46.539234  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.751501  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:46.835507  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:46.836067  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.036749  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.250892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:47.336881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:47.336877  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.536654  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.750566  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:47.770379  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:47.770654  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.857353  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:47.980545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.251036  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:48.270119  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:48.270766  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.481111  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.751338  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:48.770188  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.771890  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:48.980058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.250249  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:49.270268  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:49.270358  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.480036  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.750762  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:49.770978  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:49.772174  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.857941  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:49.980041  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.250706  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:50.269862  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:50.270014  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:50.480731  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.751060  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:50.770120  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:50.770641  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.035548  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.250927  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:51.337208  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:51.337503  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.480679  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.750819  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:51.769976  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:51.770649  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.980192  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.250287  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:52.273280  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.353216  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:52.356559  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:52.479644  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.750695  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:52.769840  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:52.769992  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.980341  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.250812  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:53.269713  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:53.269993  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.479306  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.751203  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:53.769942  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.770231  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:53.982444  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.251381  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:54.270391  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:54.270907  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.357551  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:54.479329  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.750585  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:54.769800  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.770242  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:54.980330  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.250105  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:55.272058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:55.272343  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.480049  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.750228  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:55.769721  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:55.769811  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.979630  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.250644  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:56.270143  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.270801  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:56.361917  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:56.535770  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.750820  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:56.770318  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.834677  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.037436  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:57.251657  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:57.338559  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.340296  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:57.539728  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:57.750702  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:57.836323  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.836465  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.035687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:58.250979  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:58.270445  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:58.270847  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.480099  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:58.750815  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:58.770260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:58.770835  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.858855  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:58.980298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:59.250242  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:59.271058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:59.271285  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:59.534742  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:59.749993  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:59.770735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:59.770822  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:59.980421  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:00.250549  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:00.269795  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:00.270066  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:00.481133  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:00.750352  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:00.770060  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:00.770078  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:00.980516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:01.250748  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:01.269906  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:01.270542  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:01.357167  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:01.479831  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:01.750735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:01.851522  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:01.852196  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:01.980255  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:02.250668  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:02.270004  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:02.270239  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:02.480121  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:02.750937  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:02.770293  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:02.770548  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:02.980319  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:03.250471  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:03.269687  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:03.270015  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:03.358379  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:03.480308  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:03.750910  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:03.769915  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:03.770350  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:03.980888  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:04.250949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:04.334052  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:04.334547  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:04.536288  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:04.751331  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:04.769923  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:04.770074  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:04.979484  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:05.250753  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:05.269588  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:05.270367  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:05.479717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:05.750044  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:05.770343  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:05.770697  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:05.857232  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:05.980252  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:06.250527  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:06.269894  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:06.270178  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:06.479711  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:06.750183  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:06.771071  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:06.771665  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:06.979659  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:07.251357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:07.270510  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:07.270939  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:07.480189  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:07.750845  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:07.770209  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:07.771533  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:07.857980  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:07.983095  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:08.250342  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:08.270999  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:08.271094  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:08.479975  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:08.751137  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:08.770431  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:08.770712  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:08.980321  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:09.251024  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:09.270126  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:09.270735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:09.480983  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:09.751277  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:09.769930  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:09.770147  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:09.980150  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:10.250493  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:10.269821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:10.271102  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:10.356970  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:10.481755  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:10.749841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:10.769711  397419 kapi.go:107] duration metric: took 1m7.503126792s to wait for kubernetes.io/minikube-addons=registry ...
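
The kapi.go:107 "duration metric" line above is the elapsed wall-clock time for the whole label-selector wait. Combining the two sketches above gives roughly that behavior; this fragment reuses their imports and waitForPodReady, adds fmt, and (unlike the real kapi.go loop) does not re-list, so pods created mid-wait would be missed. The CLI equivalent is kubectl wait --for=condition=Ready pod -l <selector> --timeout=6m.

	// waitForSelector blocks until every pod currently matching the selector is
	// Ready, then reports the elapsed time like the kapi.go:107 line above.
	func waitForSelector(ctx context.Context, client kubernetes.Interface, ns, selector string) error {
		start := time.Now()
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if err := waitForPodReady(ctx, client, ns, p.Name); err != nil {
				return err
			}
		}
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
		return nil
	}
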
	I0917 08:40:10.770295  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:10.979832  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:11.250142  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:11.270431  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:11.480956  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:11.753496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:11.770003  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:11.980475  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:12.250784  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:12.270813  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:12.357211  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:12.480873  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:12.751126  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:12.770604  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:12.979811  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:13.250139  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:13.270888  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:13.480241  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:13.750443  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:13.769994  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:13.979631  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:14.250829  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:14.270340  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:14.480298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:14.750382  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:14.769880  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:14.857115  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:14.980593  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:15.250737  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:15.269909  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:15.480460  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:15.750879  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:15.770052  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:15.979744  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:16.251095  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:16.270338  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:16.480567  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:16.749687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:16.770077  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.035489  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:17.250313  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:17.269943  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.356644  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:17.480054  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:17.750392  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:17.769702  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.980088  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:18.250474  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:18.269932  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:18.511698  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:18.750521  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:18.852675  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:18.979597  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:19.249859  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:19.270206  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:19.357692  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:19.480159  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:19.750104  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:19.771108  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:19.979504  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:20.251660  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:20.271175  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:20.480098  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:20.750670  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:20.770690  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:20.980839  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:21.250744  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:21.270685  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:21.357832  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:21.480348  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:21.750284  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:21.769821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:21.981107  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:22.249898  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:22.270237  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:22.480433  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:22.750573  397419 kapi.go:107] duration metric: took 1m15.003789133s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 08:40:22.752532  397419 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-093168 cluster.
	I0917 08:40:22.753817  397419 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 08:40:22.755155  397419 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
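
The three gcp-auth messages above describe the webhook contract: credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label key, and pods created before the addon finished must be recreated (or the addon re-enabled with --refresh). A minimal sketch of the opt-out, assuming an illustrative pod name `no-gcp` and label value `true` (per the message, only the key matters):

	kubectl --context addons-093168 run no-gcp --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600
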
	I0917 08:40:22.769882  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:22.979715  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:23.270378  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:23.480884  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:23.770749  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:23.856903  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:23.979682  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:24.270418  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:24.481750  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:24.838546  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:24.979926  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:25.336387  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:25.536841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:25.836400  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:25.857822  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:26.038227  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:26.270962  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:26.480310  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:26.769993  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:26.979717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:27.270245  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:27.479626  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:27.770138  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:27.979728  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:28.270445  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:28.357521  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:28.479512  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:28.771302  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:28.980203  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:29.272777  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:29.479974  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:29.771290  397419 kapi.go:107] duration metric: took 1m26.505487302s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 08:40:30.036881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:30.480783  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:30.856907  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:30.980652  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:31.480186  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:31.979880  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:32.481022  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:32.979408  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:33.357762  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:33.479779  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:33.979963  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:34.480525  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:34.980951  397419 kapi.go:107] duration metric: took 1m30.505737137s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 08:40:35.011214  397419 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, helm-tiller, nvidia-device-plugin, storage-provisioner, metrics-server, default-storageclass, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0917 08:40:35.088827  397419 addons.go:510] duration metric: took 1m37.983731495s for enable addons: enabled=[cloud-spanner ingress-dns helm-tiller nvidia-device-plugin storage-provisioner metrics-server default-storageclass inspektor-gadget yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
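
All fourteen addons are reported enabled at this point. To reproduce the check outside the harness, the same binary and profile from this run can be queried directly (a sketch; the --refresh re-run is the one the gcp-auth message above recommends for existing pods):

	out/minikube-linux-amd64 -p addons-093168 addons list
	out/minikube-linux-amd64 -p addons-093168 addons enable gcp-auth --refresh
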
	I0917 08:40:35.963282  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:38.356952  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:40.357057  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:42.857137  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:45.357585  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:47.415219  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:49.856695  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:52.357369  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:54.856959  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:56.857573  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:59.356748  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:01.357311  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:03.857150  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:05.857298  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:08.356921  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:10.856637  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:12.857089  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:15.356886  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:17.357162  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:19.857088  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:21.857768  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:22.357225  397419 pod_ready.go:93] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"True"
	I0917 08:41:22.357248  397419 pod_ready.go:82] duration metric: took 1m38.50625923s for pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.357261  397419 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.361581  397419 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace has status "Ready":"True"
	I0917 08:41:22.361602  397419 pod_ready.go:82] duration metric: took 4.33393ms for pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.361622  397419 pod_ready.go:39] duration metric: took 1m40.511686973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
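
The pod_ready.go polling above is roughly what `kubectl wait` does from the command line; a hand-run equivalent for one of the listed labels, assuming the same 6m budget used per pod in this run:

	kubectl --context addons-093168 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
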
	I0917 08:41:22.361642  397419 api_server.go:52] waiting for apiserver process to appear ...
	I0917 08:41:22.361682  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:22.361731  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:22.396772  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:22.396810  397419 cri.go:89] found id: ""
	I0917 08:41:22.396820  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:22.396885  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.401393  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:22.401457  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:22.433869  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:22.433890  397419 cri.go:89] found id: ""
	I0917 08:41:22.433898  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:22.433944  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.437332  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:22.437407  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:22.472376  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:22.472397  397419 cri.go:89] found id: ""
	I0917 08:41:22.472404  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:22.472448  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.475763  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:22.475824  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:22.509241  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:22.509272  397419 cri.go:89] found id: ""
	I0917 08:41:22.509284  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:22.509335  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.512804  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:22.512865  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:22.546986  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:22.547007  397419 cri.go:89] found id: ""
	I0917 08:41:22.547015  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:22.547060  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.550402  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:22.550459  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:22.584566  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:22.584588  397419 cri.go:89] found id: ""
	I0917 08:41:22.584604  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:22.584655  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.588033  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:22.588092  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:22.621636  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:22.621662  397419 cri.go:89] found id: ""
	I0917 08:41:22.621672  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:22.621725  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.625177  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:22.625207  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:22.651122  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:22.651158  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:22.750350  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:22.750382  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:22.794944  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:22.794981  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:22.847406  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:22.847443  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:22.882612  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:22.882647  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:22.938657  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:22.938694  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:22.980301  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:22.980332  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:23.057322  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:23.057359  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:23.092524  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:23.092557  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:23.129832  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:23.129871  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:23.165427  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:23.165458  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
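
Each "Gathering logs for …" step above pairs a container-ID lookup with a bounded log read. The same pass can be reproduced by hand on the node (the container-ID placeholder below is illustrative, not from this run):

	sudo crictl ps -a --quiet --name=kube-apiserver    # prints the container ID
	sudo crictl logs --tail 400 <container-id>         # the bounded read the harness runs
	sudo journalctl -u crio -n 400                     # the CRI-O unit logs
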
	I0917 08:41:25.744385  397419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 08:41:25.758404  397419 api_server.go:72] duration metric: took 2m28.653351209s to wait for apiserver process to appear ...
	I0917 08:41:25.758434  397419 api_server.go:88] waiting for apiserver healthz status ...
	I0917 08:41:25.758473  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:25.758517  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:25.791782  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:25.791813  397419 cri.go:89] found id: ""
	I0917 08:41:25.791824  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:25.791876  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.795162  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:25.795222  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:25.827605  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:25.827632  397419 cri.go:89] found id: ""
	I0917 08:41:25.827642  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:25.827695  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.830956  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:25.831016  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:25.864525  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:25.864552  397419 cri.go:89] found id: ""
	I0917 08:41:25.864562  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:25.864628  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.867980  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:25.868042  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:25.901946  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:25.901966  397419 cri.go:89] found id: ""
	I0917 08:41:25.901977  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:25.902026  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.905404  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:25.905458  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:25.938828  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:25.938850  397419 cri.go:89] found id: ""
	I0917 08:41:25.938859  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:25.938905  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.942182  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:25.942243  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:25.975310  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:25.975334  397419 cri.go:89] found id: ""
	I0917 08:41:25.975345  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:25.975405  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.978637  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:25.978703  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:26.012169  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:26.012190  397419 cri.go:89] found id: ""
	I0917 08:41:26.012200  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:26.012256  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:26.015540  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:26.015562  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:26.093016  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:26.093054  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:26.136808  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:26.136847  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:26.188782  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:26.188814  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:26.226705  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:26.226736  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:26.259580  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:26.259609  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:26.335847  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:26.335885  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:26.378206  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:26.378237  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:26.404518  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:26.404550  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:26.508227  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:26.508263  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:26.543742  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:26.543777  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:26.600899  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:26.600938  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.138040  397419 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 08:41:29.142631  397419 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 08:41:29.143571  397419 api_server.go:141] control plane version: v1.31.1
	I0917 08:41:29.143606  397419 api_server.go:131] duration metric: took 3.385163598s to wait for apiserver health ...
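
The healthz probe above is a plain HTTPS GET against the apiserver; checked by hand against the endpoint from this run it looks like this (-k skips verification of the cluster's self-signed certificate; the body on success is "ok", as logged above):

	curl -k https://192.168.49.2:8443/healthz
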
	I0917 08:41:29.143621  397419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 08:41:29.143650  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:29.143699  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:29.178086  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:29.178111  397419 cri.go:89] found id: ""
	I0917 08:41:29.178121  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:29.178180  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.181712  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:29.181779  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:29.215733  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:29.215755  397419 cri.go:89] found id: ""
	I0917 08:41:29.215763  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:29.215809  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.219058  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:29.219111  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:29.252251  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:29.252272  397419 cri.go:89] found id: ""
	I0917 08:41:29.252279  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:29.252321  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.255633  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:29.255688  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:29.289333  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:29.289359  397419 cri.go:89] found id: ""
	I0917 08:41:29.289369  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:29.289423  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.292943  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:29.292996  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:29.326709  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:29.326731  397419 cri.go:89] found id: ""
	I0917 08:41:29.326739  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:29.326799  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.330170  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:29.330226  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:29.363477  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:29.363501  397419 cri.go:89] found id: ""
	I0917 08:41:29.363511  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:29.363567  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.366804  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:29.366860  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:29.399852  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.399872  397419 cri.go:89] found id: ""
	I0917 08:41:29.399881  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:29.399934  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.403233  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:29.403253  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:29.451453  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:29.451484  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:29.488951  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:29.488979  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:29.523572  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:29.523603  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:29.579709  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:29.579750  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:29.658415  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:29.658455  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:29.735441  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:29.735481  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:29.762124  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:29.762159  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:29.856247  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:29.856278  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:29.902365  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:29.902398  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:29.938050  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:29.938081  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.973223  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:29.973251  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:32.526366  397419 system_pods.go:59] 19 kube-system pods found
	I0917 08:41:32.526399  397419 system_pods.go:61] "coredns-7c65d6cfc9-7lhft" [d955ab8f-33f3-4177-a7cf-29b7b9cc1102] Running
	I0917 08:41:32.526405  397419 system_pods.go:61] "csi-hostpath-attacher-0" [74cbb098-f189-44df-a4b9-3d4644fad690] Running
	I0917 08:41:32.526409  397419 system_pods.go:61] "csi-hostpath-resizer-0" [2d53c081-d93a-46a4-8b7b-29e15b9b485e] Running
	I0917 08:41:32.526413  397419 system_pods.go:61] "csi-hostpathplugin-lknd7" [3267ecfa-6ae5-4291-9944-574c0476e9ec] Running
	I0917 08:41:32.526416  397419 system_pods.go:61] "etcd-addons-093168" [a017480c-3ca0-477f-801b-630887a3efdd] Running
	I0917 08:41:32.526420  397419 system_pods.go:61] "kindnet-nvhtv" [2a27ef1d-01b4-4db6-9b83-51a2b2889bc2] Running
	I0917 08:41:32.526422  397419 system_pods.go:61] "kube-apiserver-addons-093168" [1b03826d-3f50-4a0c-a2ad-f8d354f0935a] Running
	I0917 08:41:32.526425  397419 system_pods.go:61] "kube-controller-manager-addons-093168" [2da0a6e2-49be-44c3-a463-463a9865310f] Running
	I0917 08:41:32.526428  397419 system_pods.go:61] "kube-ingress-dns-minikube" [236b5470-912c-4665-ae2a-0aeda61e0892] Running
	I0917 08:41:32.526432  397419 system_pods.go:61] "kube-proxy-t77c5" [76518769-e724-461e-8134-d120144d60a8] Running
	I0917 08:41:32.526436  397419 system_pods.go:61] "kube-scheduler-addons-093168" [8dbe178e-95a4-491e-a059-423f6b78f417] Running
	I0917 08:41:32.526441  397419 system_pods.go:61] "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
	I0917 08:41:32.526445  397419 system_pods.go:61] "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
	I0917 08:41:32.526450  397419 system_pods.go:61] "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
	I0917 08:41:32.526455  397419 system_pods.go:61] "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
	I0917 08:41:32.526461  397419 system_pods.go:61] "snapshot-controller-56fcc65765-md5h6" [ff141ee6-2569-49b0-8b1a-83d9a1a05178] Running
	I0917 08:41:32.526470  397419 system_pods.go:61] "snapshot-controller-56fcc65765-xdr22" [69737144-ad79-4db9-ae9c-e5575f580f48] Running
	I0917 08:41:32.526475  397419 system_pods.go:61] "storage-provisioner" [e20caa93-3db5-4d96-b8a8-7665d4f5437d] Running
	I0917 08:41:32.526483  397419 system_pods.go:61] "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
	I0917 08:41:32.526493  397419 system_pods.go:74] duration metric: took 3.382863956s to wait for pod list to return data ...
	I0917 08:41:32.526503  397419 default_sa.go:34] waiting for default service account to be created ...
	I0917 08:41:32.529073  397419 default_sa.go:45] found service account: "default"
	I0917 08:41:32.529100  397419 default_sa.go:55] duration metric: took 2.584342ms for default service account to be created ...
	I0917 08:41:32.529110  397419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 08:41:32.539148  397419 system_pods.go:86] 19 kube-system pods found
	I0917 08:41:32.539179  397419 system_pods.go:89] "coredns-7c65d6cfc9-7lhft" [d955ab8f-33f3-4177-a7cf-29b7b9cc1102] Running
	I0917 08:41:32.539185  397419 system_pods.go:89] "csi-hostpath-attacher-0" [74cbb098-f189-44df-a4b9-3d4644fad690] Running
	I0917 08:41:32.539189  397419 system_pods.go:89] "csi-hostpath-resizer-0" [2d53c081-d93a-46a4-8b7b-29e15b9b485e] Running
	I0917 08:41:32.539193  397419 system_pods.go:89] "csi-hostpathplugin-lknd7" [3267ecfa-6ae5-4291-9944-574c0476e9ec] Running
	I0917 08:41:32.539196  397419 system_pods.go:89] "etcd-addons-093168" [a017480c-3ca0-477f-801b-630887a3efdd] Running
	I0917 08:41:32.539200  397419 system_pods.go:89] "kindnet-nvhtv" [2a27ef1d-01b4-4db6-9b83-51a2b2889bc2] Running
	I0917 08:41:32.539203  397419 system_pods.go:89] "kube-apiserver-addons-093168" [1b03826d-3f50-4a0c-a2ad-f8d354f0935a] Running
	I0917 08:41:32.539207  397419 system_pods.go:89] "kube-controller-manager-addons-093168" [2da0a6e2-49be-44c3-a463-463a9865310f] Running
	I0917 08:41:32.539210  397419 system_pods.go:89] "kube-ingress-dns-minikube" [236b5470-912c-4665-ae2a-0aeda61e0892] Running
	I0917 08:41:32.539213  397419 system_pods.go:89] "kube-proxy-t77c5" [76518769-e724-461e-8134-d120144d60a8] Running
	I0917 08:41:32.539216  397419 system_pods.go:89] "kube-scheduler-addons-093168" [8dbe178e-95a4-491e-a059-423f6b78f417] Running
	I0917 08:41:32.539220  397419 system_pods.go:89] "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
	I0917 08:41:32.539223  397419 system_pods.go:89] "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
	I0917 08:41:32.539227  397419 system_pods.go:89] "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
	I0917 08:41:32.539230  397419 system_pods.go:89] "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
	I0917 08:41:32.539235  397419 system_pods.go:89] "snapshot-controller-56fcc65765-md5h6" [ff141ee6-2569-49b0-8b1a-83d9a1a05178] Running
	I0917 08:41:32.539242  397419 system_pods.go:89] "snapshot-controller-56fcc65765-xdr22" [69737144-ad79-4db9-ae9c-e5575f580f48] Running
	I0917 08:41:32.539245  397419 system_pods.go:89] "storage-provisioner" [e20caa93-3db5-4d96-b8a8-7665d4f5437d] Running
	I0917 08:41:32.539248  397419 system_pods.go:89] "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
	I0917 08:41:32.539255  397419 system_pods.go:126] duration metric: took 10.139894ms to wait for k8s-apps to be running ...
	I0917 08:41:32.539265  397419 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 08:41:32.539310  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 08:41:32.550663  397419 system_svc.go:56] duration metric: took 11.387952ms WaitForService to wait for kubelet
	I0917 08:41:32.550703  397419 kubeadm.go:582] duration metric: took 2m35.445654974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:41:32.550732  397419 node_conditions.go:102] verifying NodePressure condition ...
	I0917 08:41:32.553809  397419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 08:41:32.553834  397419 node_conditions.go:123] node cpu capacity is 8
	I0917 08:41:32.553851  397419 node_conditions.go:105] duration metric: took 3.112867ms to run NodePressure ...
	I0917 08:41:32.553869  397419 start.go:241] waiting for startup goroutines ...
	I0917 08:41:32.553875  397419 start.go:246] waiting for cluster config update ...
	I0917 08:41:32.553893  397419 start.go:255] writing updated cluster config ...
	I0917 08:41:32.554149  397419 ssh_runner.go:195] Run: rm -f paused
	I0917 08:41:32.604339  397419 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 08:41:32.606540  397419 out.go:177] * Done! kubectl is now configured to use "addons-093168" cluster and "default" namespace by default
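
With startup complete, kubectl talks to the cluster directly; a quick sanity pass against the context this run configured (sketch):

	kubectl config current-context                       # should print addons-093168
	kubectl --context addons-093168 get pods -n kube-system
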
	
	
	==> CRI-O <==
	Sep 17 08:57:16 addons-093168 crio[1031]: time="2024-09-17 08:57:16.869127093Z" level=info msg="Pulling image: docker.io/nginx:latest" id=baa29988-74ad-4502-811c-10c457ecda24 name=/runtime.v1.ImageService/PullImage
	Sep 17 08:57:16 addons-093168 crio[1031]: time="2024-09-17 08:57:16.873515235Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 17 08:57:17 addons-093168 crio[1031]: time="2024-09-17 08:57:17.935494486Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=353f5c7d-7bcb-4186-a913-210f9363cc4b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:17 addons-093168 crio[1031]: time="2024-09-17 08:57:17.935810602Z" level=info msg="Image docker.io/nginx:alpine not found" id=353f5c7d-7bcb-4186-a913-210f9363cc4b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:24 addons-093168 crio[1031]: time="2024-09-17 08:57:24.936355836Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0b33e400-1858-4cd2-90db-ec514db6046f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:24 addons-093168 crio[1031]: time="2024-09-17 08:57:24.936677063Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0b33e400-1858-4cd2-90db-ec514db6046f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:28 addons-093168 crio[1031]: time="2024-09-17 08:57:28.935432584Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=eab56064-5e08-47bd-aafa-b62d0c974aab name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:28 addons-093168 crio[1031]: time="2024-09-17 08:57:28.935659443Z" level=info msg="Image docker.io/nginx:alpine not found" id=eab56064-5e08-47bd-aafa-b62d0c974aab name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:29 addons-093168 crio[1031]: time="2024-09-17 08:57:29.935661626Z" level=info msg="Checking image status: busybox:stable" id=d9698ae7-d5c4-45cd-9fe1-5c2b729826bd name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:29 addons-093168 crio[1031]: time="2024-09-17 08:57:29.935879038Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Sep 17 08:57:29 addons-093168 crio[1031]: time="2024-09-17 08:57:29.936064892Z" level=info msg="Image busybox:stable not found" id=d9698ae7-d5c4-45cd-9fe1-5c2b729826bd name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:35 addons-093168 crio[1031]: time="2024-09-17 08:57:35.936221142Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2456a520-c986-455f-864e-9489eee6ad08 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:35 addons-093168 crio[1031]: time="2024-09-17 08:57:35.936480365Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2456a520-c986-455f-864e-9489eee6ad08 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:40 addons-093168 crio[1031]: time="2024-09-17 08:57:40.936168822Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=242cef4f-1de1-49b8-b76d-a61e79abd56c name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:40 addons-093168 crio[1031]: time="2024-09-17 08:57:40.936218086Z" level=info msg="Checking image status: busybox:stable" id=237afb18-f864-475c-9720-e9fa3ed35d35 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:40 addons-093168 crio[1031]: time="2024-09-17 08:57:40.936377789Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Sep 17 08:57:40 addons-093168 crio[1031]: time="2024-09-17 08:57:40.936447613Z" level=info msg="Image docker.io/nginx:alpine not found" id=242cef4f-1de1-49b8-b76d-a61e79abd56c name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:40 addons-093168 crio[1031]: time="2024-09-17 08:57:40.936518223Z" level=info msg="Image busybox:stable not found" id=237afb18-f864-475c-9720-e9fa3ed35d35 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:47 addons-093168 crio[1031]: time="2024-09-17 08:57:47.628357730Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=f8c3611d-06c0-4108-a9a6-ed8b6db1351b name=/runtime.v1.ImageService/PullImage
	Sep 17 08:57:47 addons-093168 crio[1031]: time="2024-09-17 08:57:47.629718008Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 17 08:57:49 addons-093168 crio[1031]: time="2024-09-17 08:57:49.936447065Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9cfe1f17-58f3-4f8d-b29c-66e9c501ffd9 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:49 addons-093168 crio[1031]: time="2024-09-17 08:57:49.936657922Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9cfe1f17-58f3-4f8d-b29c-66e9c501ffd9 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:53 addons-093168 crio[1031]: time="2024-09-17 08:57:53.935568552Z" level=info msg="Checking image status: busybox:stable" id=68349d56-28b0-42f2-bbad-4491a9dc0aff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:57:53 addons-093168 crio[1031]: time="2024-09-17 08:57:53.935756150Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Sep 17 08:57:53 addons-093168 crio[1031]: time="2024-09-17 08:57:53.935892118Z" level=info msg="Image busybox:stable not found" id=68349d56-28b0-42f2-bbad-4491a9dc0aff name=/runtime.v1.ImageService/ImageStatus
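
The CRI-O excerpt shows a loop of "Image ... not found" checks and fresh pull attempts for docker.io images, the same images the failing tests wait on. Pull behavior can be checked by hand from the node (sketch; one common cause of such loops in CI is docker.io rate limiting, though this log does not confirm the cause):

	sudo crictl pull docker.io/library/nginx:alpine
	sudo crictl images | grep -E 'nginx|busybox'
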
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0906bd347c6d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          17 minutes ago      Running             csi-snapshotter                          0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	f64b5aebbe7dd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          17 minutes ago      Running             csi-provisioner                          0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	eba5434cab6ab       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 minutes ago      Running             liveness-probe                           0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	057ac2c02266d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 minutes ago      Running             hostpath                                 0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	a0cca87be1a6f       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             17 minutes ago      Running             controller                               0                   2ba51e0898663       ingress-nginx-controller-bc57996ff-vgw4z
	db9ecacd5aed6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                17 minutes ago      Running             node-driver-registrar                    0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	843e30f0a0cf8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 17 minutes ago      Running             gcp-auth                                 0                   2e75c3dc5c24b       gcp-auth-89d5ffd79-xhlm6
	a53dfdb3b91a2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              17 minutes ago      Running             csi-resizer                              0                   e4b2df5e4c60c       csi-hostpath-resizer-0
	a31591d3a75de       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             17 minutes ago      Running             local-path-provisioner                   0                   655c3c112fdda       local-path-provisioner-86d989889c-qkqjp
	221d8f80ce839       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             17 minutes ago      Running             csi-attacher                             0                   47552b94b1444       csi-hostpath-attacher-0
	12e5d8714fa59       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   17 minutes ago      Exited              patch                                    0                   4d5a9d109a211       ingress-nginx-admission-patch-pzmkp
	f921ee5175ec0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   17 minutes ago      Running             csi-external-health-monitor-controller   0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	a54dcb4e0840a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   17 minutes ago      Exited              create                                   0                   fc238c2462bf5       ingress-nginx-admission-create-4qdns
	b1aa0b4e6a00c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      18 minutes ago      Running             volume-snapshot-controller               0                   47f5d8b226a2a       snapshot-controller-56fcc65765-xdr22
	85332a0e5866e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      18 minutes ago      Running             volume-snapshot-controller               0                   f61171da5bfb1       snapshot-controller-56fcc65765-md5h6
	3300f395d8567       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             18 minutes ago      Running             minikube-ingress-dns                     0                   f7a1428432f34       kube-ingress-dns-minikube
	5eddba40afd11       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             18 minutes ago      Running             coredns                                  0                   ebe1938207849       coredns-7c65d6cfc9-7lhft
	6d7dbaef7a5cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             18 minutes ago      Running             storage-provisioner                      0                   c9466fe8d518b       storage-provisioner
	3a8b894037793       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             18 minutes ago      Running             kube-proxy                               0                   eb334b9a5799a       kube-proxy-t77c5
	c9fa6b2ef5f0b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             18 minutes ago      Running             kindnet-cni                              0                   2e76c07fa96a5       kindnet-nvhtv
	e817293c644c7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             19 minutes ago      Running             kube-scheduler                           0                   a4765fe76b73a       kube-scheduler-addons-093168
	3521aa957963e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             19 minutes ago      Running             kube-controller-manager                  0                   2608552715e00       kube-controller-manager-addons-093168
	498509ee96967       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             19 minutes ago      Running             etcd                                     0                   62ce9ab109c53       etcd-addons-093168
	a2e61e738c0da       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             19 minutes ago      Running             kube-apiserver                           0                   bceb5d8367d07       kube-apiserver-addons-093168
	
	
	==> coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] <==
	[INFO] 10.244.0.11:33082 - 25853 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001192s
	[INFO] 10.244.0.11:37329 - 15527 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075609s
	[INFO] 10.244.0.11:37329 - 17316 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121561s
	[INFO] 10.244.0.11:60250 - 35649 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005099659s
	[INFO] 10.244.0.11:60250 - 60739 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006207516s
	[INFO] 10.244.0.11:37419 - 41998 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006428119s
	[INFO] 10.244.0.11:37419 - 39435 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006489964s
	[INFO] 10.244.0.11:56965 - 22146 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005110836s
	[INFO] 10.244.0.11:56965 - 41870 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005774722s
	[INFO] 10.244.0.11:40932 - 6018 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055144s
	[INFO] 10.244.0.11:40932 - 2693 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093554s
	[INFO] 10.244.0.20:60603 - 21372 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000239521s
	[INFO] 10.244.0.20:56296 - 33744 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000369472s
	[INFO] 10.244.0.20:40076 - 30284 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123756s
	[INFO] 10.244.0.20:49639 - 52270 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158323s
	[INFO] 10.244.0.20:40994 - 1923 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099192s
	[INFO] 10.244.0.20:37435 - 32231 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168193s
	[INFO] 10.244.0.20:36201 - 45290 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008885924s
	[INFO] 10.244.0.20:59898 - 55008 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008870022s
	[INFO] 10.244.0.20:43991 - 39302 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007846244s
	[INFO] 10.244.0.20:58304 - 34077 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008338334s
	[INFO] 10.244.0.20:34428 - 29339 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006763856s
	[INFO] 10.244.0.20:47732 - 9825 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007153268s
	[INFO] 10.244.0.20:52184 - 47443 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000802704s
	[INFO] 10.244.0.20:41521 - 18294 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000879797s
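
The coredns entries above show the resolver walking the pod's DNS search path: the query for registry.kube-system.svc.cluster.local is first retried with each search suffix appended (svc.cluster.local, cluster.local, then the GCP-internal domains), each returning NXDOMAIN, before the bare fully-qualified name finally answers NOERROR with A and AAAA records. A minimal sketch of the same lookup, assuming it runs inside a cluster pod (hypothetical standalone program, not part of the test suite):

	// resolve_registry.go: reproduce the lookup recorded in the coredns log.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The trailing dot makes the name fully qualified, so the resolver
		// skips the search path and goes straight to the NOERROR case above.
		addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("registry resolves to:", addrs)
	}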
	
	
	==> describe nodes <==
	Name:               addons-093168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-093168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=addons-093168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T08_38_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-093168
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-093168"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:38:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-093168
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 08:57:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-093168
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fdb73868874fa2aa4322a27fc496be
	  System UUID:                7036efa9-bcf4-469e-8312-994f69eacc62
	  Boot ID:                    8c59a26b-5d0c-4753-9e88-ef03399e569b
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  default                     test-local-path                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  gcp-auth                    gcp-auth-89d5ffd79-xhlm6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vgw4z    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-7lhft                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpathplugin-lknd7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-addons-093168                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-nvhtv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-addons-093168                250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-093168       200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-t77c5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-093168                100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-56fcc65765-md5h6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-56fcc65765-xdr22        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  local-path-storage          local-path-provisioner-86d989889c-qkqjp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 18m   kube-proxy       
	  Normal   Starting                 19m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  19m   kubelet          Node addons-093168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m   kubelet          Node addons-093168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m   kubelet          Node addons-093168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m   node-controller  Node addons-093168 event: Registered Node addons-093168 in Controller
	  Normal   NodeReady                18m   kubelet          Node addons-093168 status is now: NodeReady
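
For reference, the Allocated resources block above is the column sum of the Non-terminated Pods table: CPU requests are 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 950m, i.e. 950m / 8000m ≈ 11.9% of the node's 8 CPUs, reported as 11%. Memory requests are 90Mi + 70Mi + 100Mi + 50Mi = 310Mi, and the only limits set are coredns (170Mi) and kindnet (100m CPU, 50Mi), which gives the 100m / 220Mi limit totals.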
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba ff 74 a1 5e 3b 08 06
	[ +13.302976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 08 54 46 b8 ba 08 06
	[  +0.000352] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba ff 74 a1 5e 3b 08 06
	[Sep17 08:24] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 24 b9 ac 9a ab 08 06
	[  +0.000405] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a b6 29 69 41 ca 08 06
	[ +18.455196] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 92 00 b0 ac cb 10 08 06
	[  +0.102770] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	[ +10.887970] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cni0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[  +0.094820] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[Sep17 08:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 14 a2 f8 f7 06 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[ +21.407596] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 7a 9f 11 c8 01 08 06
	[  +0.000366] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	
	
	==> etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] <==
	{"level":"info","ts":"2024-09-17T08:39:00.944484Z","caller":"traceutil/trace.go:171","msg":"trace[1814892135] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:389; }","duration":"195.704417ms","start":"2024-09-17T08:39:00.748773Z","end":"2024-09-17T08:39:00.944477Z","steps":["trace[1814892135] 'agreement among raft nodes before linearized reading'  (duration: 193.700916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:00.942519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.799596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-17T08:39:00.944656Z","caller":"traceutil/trace.go:171","msg":"trace[1494037761] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:389; }","duration":"195.932813ms","start":"2024-09-17T08:39:00.748716Z","end":"2024-09-17T08:39:00.944649Z","steps":["trace[1494037761] 'agreement among raft nodes before linearized reading'  (duration: 193.78917ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.236883Z","caller":"traceutil/trace.go:171","msg":"trace[1393862041] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"189.03103ms","start":"2024-09-17T08:39:01.047836Z","end":"2024-09-17T08:39:01.236868Z","steps":["trace[1393862041] 'process raft request'  (duration: 84.371141ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246393Z","caller":"traceutil/trace.go:171","msg":"trace[350871136] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"192.090658ms","start":"2024-09-17T08:39:01.054286Z","end":"2024-09-17T08:39:01.246377Z","steps":["trace[350871136] 'process raft request'  (duration: 192.056665ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246556Z","caller":"traceutil/trace.go:171","msg":"trace[288716589] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"192.561769ms","start":"2024-09-17T08:39:01.053978Z","end":"2024-09-17T08:39:01.246540Z","steps":["trace[288716589] 'process raft request'  (duration: 192.289701ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246589Z","caller":"traceutil/trace.go:171","msg":"trace[842047613] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"194.309372ms","start":"2024-09-17T08:39:01.052273Z","end":"2024-09-17T08:39:01.246583Z","steps":["trace[842047613] 'process raft request'  (duration: 193.860025ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246756Z","caller":"traceutil/trace.go:171","msg":"trace[874038599] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"192.611349ms","start":"2024-09-17T08:39:01.054136Z","end":"2024-09-17T08:39:01.246747Z","steps":["trace[874038599] 'process raft request'  (duration: 192.166716ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246789Z","caller":"traceutil/trace.go:171","msg":"trace[832402900] linearizableReadLoop","detail":"{readStateIndex:412; appliedIndex:412; }","duration":"107.196849ms","start":"2024-09-17T08:39:01.139584Z","end":"2024-09-17T08:39:01.246781Z","steps":["trace[832402900] 'read index received'  (duration: 107.193495ms)","trace[832402900] 'applied index is now lower than readState.Index'  (duration: 2.936µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T08:39:01.246842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.242882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.247903Z","caller":"traceutil/trace.go:171","msg":"trace[1595279853] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"108.044342ms","start":"2024-09-17T08:39:01.139580Z","end":"2024-09-17T08:39:01.247624Z","steps":["trace[1595279853] 'agreement among raft nodes before linearized reading'  (duration: 107.221566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.249317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.530022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.250846Z","caller":"traceutil/trace.go:171","msg":"trace[1335273238] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:407; }","duration":"111.069492ms","start":"2024-09-17T08:39:01.139765Z","end":"2024-09-17T08:39:01.250834Z","steps":["trace[1335273238] 'agreement among raft nodes before linearized reading'  (duration: 109.456626ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.249635Z","caller":"traceutil/trace.go:171","msg":"trace[134367931] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"109.798885ms","start":"2024-09-17T08:39:01.139825Z","end":"2024-09-17T08:39:01.249624Z","steps":["trace[134367931] 'process raft request'  (duration: 109.176303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.250797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.892038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.251932Z","caller":"traceutil/trace.go:171","msg":"trace[1048075780] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:407; }","duration":"112.027319ms","start":"2024-09-17T08:39:01.139891Z","end":"2024-09-17T08:39:01.251919Z","steps":["trace[1048075780] 'agreement among raft nodes before linearized reading'  (duration: 110.877975ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:40:35.768719Z","caller":"traceutil/trace.go:171","msg":"trace[33144781] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"100.543757ms","start":"2024-09-17T08:40:35.668147Z","end":"2024-09-17T08:40:35.768691Z","steps":["trace[33144781] 'process raft request'  (duration: 100.303667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:40:35.958931Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.840736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-bmr95\" ","response":"range_response_count:1 size:4865"}
	{"level":"info","ts":"2024-09-17T08:40:35.958981Z","caller":"traceutil/trace.go:171","msg":"trace[13582332] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-bmr95; range_end:; response_count:1; response_revision:1201; }","duration":"105.907905ms","start":"2024-09-17T08:40:35.853062Z","end":"2024-09-17T08:40:35.958970Z","steps":["trace[13582332] 'range keys from in-memory index tree'  (duration: 105.71294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:48:48.277449Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1537}
	{"level":"info","ts":"2024-09-17T08:48:48.301907Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1537,"took":"23.976999ms","hash":2118524458,"current-db-size-bytes":6434816,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-17T08:48:48.301956Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2118524458,"revision":1537,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T08:53:48.282008Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1957}
	{"level":"info","ts":"2024-09-17T08:53:48.297895Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1957,"took":"15.390014ms","hash":1935728090,"current-db-size-bytes":6434816,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3989504,"current-db-size-in-use":"4.0 MB"}
	{"level":"info","ts":"2024-09-17T08:53:48.297952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1935728090,"revision":1957,"compact-revision":1537}
	
	
	==> gcp-auth [843e30f0a0cf860efc230a2a87deca3cc75d4f6408e31a84a0dd5b01df4dc08d] <==
	2024/09/17 08:41:32 Ready to write response ...
	2024/09/17 08:41:32 Ready to marshal response ...
	2024/09/17 08:41:32 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:45 Ready to marshal response ...
	2024/09/17 08:49:45 Ready to write response ...
	2024/09/17 08:49:46 Ready to marshal response ...
	2024/09/17 08:49:46 Ready to write response ...
	2024/09/17 08:49:51 Ready to marshal response ...
	2024/09/17 08:49:51 Ready to write response ...
	2024/09/17 08:49:52 Ready to marshal response ...
	2024/09/17 08:49:52 Ready to write response ...
	2024/09/17 08:49:52 Ready to marshal response ...
	2024/09/17 08:49:52 Ready to write response ...
	2024/09/17 08:49:53 Ready to marshal response ...
	2024/09/17 08:49:53 Ready to write response ...
	2024/09/17 08:49:54 Ready to marshal response ...
	2024/09/17 08:49:54 Ready to write response ...
	2024/09/17 08:50:25 Ready to marshal response ...
	2024/09/17 08:50:25 Ready to write response ...
	
	
	==> kernel <==
	 08:57:56 up  2:40,  0 users,  load average: 0.00, 0.10, 0.45
	Linux addons-093168 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] <==
	I0917 08:55:51.153887       1 main.go:299] handling current node
	I0917 08:56:01.148935       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:01.148985       1 main.go:299] handling current node
	I0917 08:56:11.149025       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:11.149058       1 main.go:299] handling current node
	I0917 08:56:21.153988       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:21.154028       1 main.go:299] handling current node
	I0917 08:56:31.152025       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:31.152061       1 main.go:299] handling current node
	I0917 08:56:41.149490       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:41.149527       1 main.go:299] handling current node
	I0917 08:56:51.157606       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:51.157639       1 main.go:299] handling current node
	I0917 08:57:01.149679       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:57:01.149711       1 main.go:299] handling current node
	I0917 08:57:11.150506       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:57:11.150543       1 main.go:299] handling current node
	I0917 08:57:21.154106       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:57:21.154146       1 main.go:299] handling current node
	I0917 08:57:31.148674       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:57:31.148710       1 main.go:299] handling current node
	I0917 08:57:41.149642       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:57:41.149695       1 main.go:299] handling current node
	I0917 08:57:51.156017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:57:51.156050       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] <==
	W0917 08:41:23.031606       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 08:41:23.031645       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0917 08:41:23.031691       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 08:41:23.032764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 08:41:23.032787       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0917 08:41:27.038506       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W0917 08:41:27.038723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 08:41:27.039088       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 08:41:27.049456       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0917 08:49:36.125694       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.199.141"}
	E0917 08:49:48.897202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40012: use of closed network connection
	E0917 08:49:48.922992       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:41648: read: connection reset by peer
	E0917 08:49:53.964352       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0917 08:49:54.758375       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 08:49:54.934461       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.248.144"}
	I0917 08:50:05.538716       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 08:53:08.601116       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 08:53:09.617791       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0917 08:56:28.094886       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
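
The recurring v1beta1.metrics.k8s.io errors earlier in this log mean the aggregated APIService's backend at https://10.96.221.184:443 was failing to answer, and the closing "Nothing (removed from the queue)" entry at 08:56:28 is consistent with the APIService disappearing once the metrics-server addon was torn down. A minimal sketch for inspecting such an APIService's availability conditions with the kube-aggregator client (assumed standalone helper, not part of the test suite):

	// apiservice_check.go: print the status conditions behind an aggregated API.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := aggregator.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		svc, err := cs.ApiregistrationV1().APIServices().
			Get(context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
		if err != nil {
			panic(err) // NotFound here matches the addon having been removed
		}
		for _, c := range svc.Status.Conditions {
			fmt.Printf("%s=%s reason=%s: %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}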
	
	
	==> kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] <==
	I0917 08:53:18.716391       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0917 08:53:19.260347       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:19.260394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:53:26.564141       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 08:53:26.564179       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 08:53:26.970602       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 08:53:26.970647       1 shared_informer.go:320] Caches are synced for garbage collector
	W0917 08:53:31.159305       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:31.159360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:53:52.995298       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:52.995348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:54:39.470027       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:54:39.470076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:55:18.328582       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:55:18.328633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:55:32.767462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093168"
	I0917 08:55:35.162554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="7.818µs"
	W0917 08:55:52.138508       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:55:52.138568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:56:34.211807       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:56:34.211876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:57:19.887436       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:57:19.887483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:57:55.541299       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:57:55.541356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] <==
	I0917 08:39:00.642627       1 server_linux.go:66] "Using iptables proxy"
	I0917 08:39:01.648049       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 08:39:01.648220       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:39:02.034353       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 08:39:02.034507       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:39:02.043649       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:39:02.044366       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:39:02.044467       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:39:02.047306       1 config.go:199] "Starting service config controller"
	I0917 08:39:02.047353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:39:02.047414       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:39:02.047425       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:39:02.048125       1 config.go:328] "Starting node config controller"
	I0917 08:39:02.048199       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:39:02.148044       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 08:39:02.148173       1 shared_informer.go:320] Caches are synced for service config
	I0917 08:39:02.150486       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] <==
	W0917 08:38:49.536513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0917 08:38:49.536752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0917 08:38:49.536844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 08:38:49.536913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 08:38:49.537008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:49.536852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.537056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.536771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:49.537088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 08:38:49.537126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 08:38:49.537153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.537194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0917 08:38:49.537194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 08:38:49.537213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.537222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:50.443859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 08:38:50.443910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:50.468561       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:50.468614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 08:38:50.759161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 08:57:12 addons-093168 kubelet[1648]: E0917 08:57:12.937090    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:57:16 addons-093168 kubelet[1648]: E0917 08:57:16.868345    1648 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Sep 17 08:57:16 addons-093168 kubelet[1648]: E0917 08:57:16.868415    1648 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Sep 17 08:57:16 addons-093168 kubelet[1648]: E0917 08:57:16.868656    1648 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9njfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(e7497496-c2fe-46d3-98d2-378a076580ac): ErrImagePull: loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 08:57:16 addons-093168 kubelet[1648]: E0917 08:57:16.870332    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="e7497496-c2fe-46d3-98d2-378a076580ac"
	Sep 17 08:57:17 addons-093168 kubelet[1648]: E0917 08:57:17.936061    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="310f797d-f8e1-4d73-abe1-05f4dc832ecc"
	Sep 17 08:57:22 addons-093168 kubelet[1648]: E0917 08:57:22.292782    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563442292543307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:57:22 addons-093168 kubelet[1648]: E0917 08:57:22.292824    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563442292543307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:57:24 addons-093168 kubelet[1648]: E0917 08:57:24.936961    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:57:28 addons-093168 kubelet[1648]: E0917 08:57:28.935917    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="310f797d-f8e1-4d73-abe1-05f4dc832ecc"
	Sep 17 08:57:29 addons-093168 kubelet[1648]: E0917 08:57:29.936326    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="e7497496-c2fe-46d3-98d2-378a076580ac"
	Sep 17 08:57:32 addons-093168 kubelet[1648]: E0917 08:57:32.294827    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563452294598345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:57:32 addons-093168 kubelet[1648]: E0917 08:57:32.294858    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563452294598345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:57:35 addons-093168 kubelet[1648]: E0917 08:57:35.936728    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:57:40 addons-093168 kubelet[1648]: E0917 08:57:40.936799    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="e7497496-c2fe-46d3-98d2-378a076580ac"
	Sep 17 08:57:42 addons-093168 kubelet[1648]: E0917 08:57:42.297951    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563462297615280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:57:42 addons-093168 kubelet[1648]: E0917 08:57:42.297992    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563462297615280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:57:47 addons-093168 kubelet[1648]: E0917 08:57:47.627450    1648 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 08:57:47 addons-093168 kubelet[1648]: E0917 08:57:47.627519    1648 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 08:57:47 addons-093168 kubelet[1648]: E0917 08:57:47.627807    1648 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzwmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod-restore_default(973c077a-45c1-4c85-bd62-419d8901a499): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 08:57:47 addons-093168 kubelet[1648]: E0917 08:57:47.629514    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="973c077a-45c1-4c85-bd62-419d8901a499"
	Sep 17 08:57:49 addons-093168 kubelet[1648]: E0917 08:57:49.936908    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:57:52 addons-093168 kubelet[1648]: E0917 08:57:52.299790    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563472299505209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:57:52 addons-093168 kubelet[1648]: E0917 08:57:52.299825    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563472299505209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:57:53 addons-093168 kubelet[1648]: E0917 08:57:53.936149    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="e7497496-c2fe-46d3-98d2-378a076580ac"
	
	
	==> storage-provisioner [6d7dbaef7a5cdfbfc36d8383927eea1f42c07e4bc01e6aa61dd711665433a6d2] <==
	I0917 08:39:42.145412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:39:42.155383       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:39:42.155443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 08:39:42.163576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 08:39:42.163731       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e63dab40-9e98-4f4f-adef-1b218f507e90", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b became leader
	I0917 08:39:42.163849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b!
	I0917 08:39:42.264554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b!
	

                                                
                                                
-- /stdout --
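The eviction-manager errors repeated throughout the kubelet log above ("failed to get HasDedicatedImageFs: missing image stats") come from the kubelet polling CRI-O's ImageFsInfo endpoint. As a diagnostic sketch (assuming crictl is available inside the kicbase node, as it normally is in minikube images), the same ImageFsInfoResponse can be queried directly on the node:

	# Open a shell on this profile's node and ask CRI-O for its image
	# filesystem stats -- the same structure dumped in the kubelet errors.
	out/minikube-linux-amd64 -p addons-093168 ssh -- sudo crictl imagefsinfo
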
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093168 -n addons-093168
helpers_test.go:261: (dbg) Run:  kubectl --context addons-093168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp: exit status 1 (87.345614ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:41:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gdp6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gdp6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  16m                 default-scheduler  Successfully assigned default/busybox to addons-093168
	  Normal   Pulling    14m (x4 over 16m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     14m (x4 over 16m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     14m (x4 over 16m)   kubelet            Error: ErrImagePull
	  Warning  Failed     14m (x6 over 16m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    71s (x59 over 16m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:49:54 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dd297 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dd297:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m3s                  default-scheduler  Successfully assigned default/nginx to addons-093168
	  Warning  Failed     7m27s                 kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m37s (x4 over 8m2s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     102s (x4 over 7m27s)  kubelet            Error: ErrImagePull
	  Warning  Failed     102s (x3 over 5m24s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    67s (x7 over 7m26s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     67s (x7 over 7m26s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:50:25 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gzwmm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-gzwmm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  7m32s                default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-093168
	  Normal   BackOff    96s (x5 over 5m54s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     96s (x5 over 5m54s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    81s (x4 over 7m31s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     10s (x4 over 5m54s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     10s (x4 over 5m54s)  kubelet            Error: ErrImagePull
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:49:57 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9njfw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-9njfw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m                    default-scheduler  Successfully assigned default/test-local-path to addons-093168
	  Normal   Pulling    113s (x4 over 7m58s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     41s (x4 over 6m26s)   kubelet            Failed to pull image "busybox:stable": loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     41s (x4 over 6m26s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    17s (x7 over 6m25s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     17s (x7 over 6m25s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4qdns" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pzmkp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp: exit status 1
--- FAIL: TestAddons/parallel/Ingress (482.96s)
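Nearly every event trail above ends in docker.io's toomanyrequests response, i.e. the unauthenticated Docker Hub pull quota on this CI host was exhausted, which suggests the failure is environmental rather than a product regression. Docker Hub reports the remaining quota via ratelimit headers on a manifest HEAD request; a quick check, sketched after Docker's documented procedure (requires curl and jq):

	# Fetch an anonymous pull token, then read the ratelimit-limit and
	# ratelimit-remaining headers from the ratelimitpreview/test repository.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

A ratelimit-remaining of 0 here would confirm the pull failures; authenticating the runtime against Docker Hub raises the limit, as the error message itself suggests.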

                                                
                                    
TestAddons/parallel/MetricsServer (288.95s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.68502ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004217133s
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (65.543126ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 11m56.540714942s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (67.631156ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 11m58.239151534s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (66.422514ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 12m2.159492324s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (65.981107ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 12m8.919471291s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (65.036621ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 12m18.755703943s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (65.131889ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 12m32.167531508s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (64.500412ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 12m57.105641487s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (66.114128ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 13m38.886178084s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (66.471795ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 14m50.174885926s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (66.690442ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 16m2.462469751s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-093168 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-093168 top pods -n kube-system: exit status 1 (67.650716ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7lhft, age: 16m37.714012087s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
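All eleven kubectl top retries above return "Metrics not available" for the same coredns pod, which typically means the aggregated metrics.k8s.io API never started serving data even though the metrics-server pod itself reports Running. A diagnostic sketch against this run's context (commands assume the cluster is still up):

	# Check that the aggregated API is registered and Available, then look
	# at metrics-server's own logs for scrape or TLS errors.
	kubectl --context addons-093168 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-093168 -n kube-system logs -l k8s-app=metrics-server --tail=50
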
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-093168
helpers_test.go:235: (dbg) docker inspect addons-093168:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926",
	        "Created": "2024-09-17T08:38:37.745470595Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 398166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T08:38:37.853843611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/hosts",
	        "LogPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926-json.log",
	        "Name": "/addons-093168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-093168:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-093168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe-init/diff:/var/lib/docker/overlay2/22ea169b69b771958d5aa21d4886a5f67242c32d10a387f6aa1fe4f8feab18cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-093168",
	                "Source": "/var/lib/docker/volumes/addons-093168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-093168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-093168",
	                "name.minikube.sigs.k8s.io": "addons-093168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a27331437cb7fe2f3918d4f21c6d0976e37e8d2fb43412d6ed2152b1f3b4fa1d",
	            "SandboxKey": "/var/run/docker/netns/a27331437cb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-093168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b1ff23e6ca5d5222d1d8818100c713ebb16a506c62eb4243a00007b105030e92",
	                    "EndpointID": "6cf14f071fae4cd24a1dac2c9e7c6dc188dcb38a38a4daaba6556d5caaa91067",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-093168",
	                        "f0cc99258b2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-093168 -n addons-093168
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-093168 logs -n 25: (1.225574252s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-963544              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | -o=json --download-only              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-223077              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-223077              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-223077              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                   | download-docker-146413 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | download-docker-146413               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-146413            | download-docker-146413 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                   | binary-mirror-713061   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | binary-mirror-713061                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45413               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-713061              | binary-mirror-713061   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| addons  | disable dashboard -p                 | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| start   | -p addons-093168 --wait=true         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | -p addons-093168                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | -p addons-093168                     |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-093168 ip                     | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:53 UTC | 17 Sep 24 08:53 UTC |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:53 UTC | 17 Sep 24 08:53 UTC |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | addons-093168 addons                 | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:55 UTC | 17 Sep 24 08:55 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 08:38:14.268718  397419 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:14.268997  397419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:14.269006  397419 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:14.269011  397419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:14.269250  397419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 08:38:14.269979  397419 out.go:352] Setting JSON to false
	I0917 08:38:14.270971  397419 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8443,"bootTime":1726553851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:14.271094  397419 start.go:139] virtualization: kvm guest
	I0917 08:38:14.273237  397419 out.go:177] * [addons-093168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 08:38:14.274641  397419 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 08:38:14.274672  397419 notify.go:220] Checking for updates...
	I0917 08:38:14.276997  397419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:14.277996  397419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:14.278999  397419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 08:38:14.280101  397419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 08:38:14.281266  397419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 08:38:14.282616  397419 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:14.304074  397419 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:14.304175  397419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:14.349142  397419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:14.340459492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:14.349250  397419 docker.go:318] overlay module found
	I0917 08:38:14.351082  397419 out.go:177] * Using the docker driver based on user configuration
	I0917 08:38:14.352358  397419 start.go:297] selected driver: docker
	I0917 08:38:14.352372  397419 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:14.352389  397419 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 08:38:14.353172  397419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:14.398286  397419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:14.389900591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
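The blob above is the decoded output of docker system info --format "{{json .}}". As a rough illustration of that call (not minikube's actual info.go, and decoding only a handful of the fields visible above), the pattern in Go is:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds a small subset of the fields visible in the logged
// blob; the real struct carries many more.
type dockerInfo struct {
	ID              string
	NCPU            int
	MemTotal        int64
	OperatingSystem string
	ServerVersion   string
}

func main() {
	// Ask the daemon to print its info as a single JSON object.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info)
}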
	I0917 08:38:14.398447  397419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:14.398700  397419 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:38:14.400294  397419 out.go:177] * Using Docker driver with root privileges
	I0917 08:38:14.401571  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:14.401650  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:14.401663  397419 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 08:38:14.401757  397419 start.go:340] cluster config:
	{Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:14.402986  397419 out.go:177] * Starting "addons-093168" primary control-plane node in "addons-093168" cluster
	I0917 08:38:14.404072  397419 cache.go:121] Beginning downloading kic base image for docker with crio
	I0917 08:38:14.405262  397419 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0917 08:38:14.406317  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:14.406352  397419 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 08:38:14.406353  397419 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0917 08:38:14.406362  397419 cache.go:56] Caching tarball of preloaded images
	I0917 08:38:14.406475  397419 preload.go:172] Found /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 08:38:14.406487  397419 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 08:38:14.406819  397419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json ...
	I0917 08:38:14.406838  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json: {Name:mk614388e178da61bf05196ce91ed40880ae45f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:14.422815  397419 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0917 08:38:14.422934  397419 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0917 08:38:14.422949  397419 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0917 08:38:14.422954  397419 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0917 08:38:14.422960  397419 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0917 08:38:14.422968  397419 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0917 08:38:25.896345  397419 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0917 08:38:25.896393  397419 cache.go:194] Successfully downloaded all kic artifacts
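The cache.go lines above load the kic base image from a cached tarball instead of pulling it from the registry. A minimal sketch of that step (assuming a local tarball path "kicbase.tar", a placeholder; the real code streams the cached file into the daemon):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Stream a cached image tarball into the local Docker daemon,
	// the moral equivalent of "docker load -i kicbase.tar".
	f, err := os.Open("kicbase.tar") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}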
	I0917 08:38:25.896448  397419 start.go:360] acquireMachinesLock for addons-093168: {Name:mkac87ef08cf18f2f3037d42f97e6975bc93fa09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 08:38:25.896575  397419 start.go:364] duration metric: took 100.043µs to acquireMachinesLock for "addons-093168"
	I0917 08:38:25.896610  397419 start.go:93] Provisioning new machine with config: &{Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 08:38:25.896717  397419 start.go:125] createHost starting for "" (driver="docker")
	I0917 08:38:25.898703  397419 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 08:38:25.898987  397419 start.go:159] libmachine.API.Create for "addons-093168" (driver="docker")
	I0917 08:38:25.899037  397419 client.go:168] LocalClient.Create starting
	I0917 08:38:25.899156  397419 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem
	I0917 08:38:26.182492  397419 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem
	I0917 08:38:26.297180  397419 cli_runner.go:164] Run: docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 08:38:26.312692  397419 cli_runner.go:211] docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 08:38:26.312773  397419 network_create.go:284] running [docker network inspect addons-093168] to gather additional debugging logs...
	I0917 08:38:26.312794  397419 cli_runner.go:164] Run: docker network inspect addons-093168
	W0917 08:38:26.328447  397419 cli_runner.go:211] docker network inspect addons-093168 returned with exit code 1
	I0917 08:38:26.328492  397419 network_create.go:287] error running [docker network inspect addons-093168]: docker network inspect addons-093168: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-093168 not found
	I0917 08:38:26.328507  397419 network_create.go:289] output of [docker network inspect addons-093168]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-093168 not found
	
	** /stderr **
	I0917 08:38:26.328630  397419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:26.344660  397419 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b00bc0}
	I0917 08:38:26.344706  397419 network_create.go:124] attempt to create docker network addons-093168 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 08:38:26.344757  397419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-093168 addons-093168
	I0917 08:38:26.403233  397419 network_create.go:108] docker network addons-093168 192.168.49.0/24 created
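The network_create.go lines show the exact docker network create invocation minikube issued. A sketch that replays it via os/exec (the flag values are copied from the logged command; the network name "demo-net" is a placeholder so it does not collide with a real cluster network):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same bridge/subnet/gateway/MTU options as the logged command.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"demo-net").CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("create failed:", err)
	}
}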
	I0917 08:38:26.403277  397419 kic.go:121] calculated static IP "192.168.49.2" for the "addons-093168" container
	I0917 08:38:26.403354  397419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 08:38:26.419565  397419 cli_runner.go:164] Run: docker volume create addons-093168 --label name.minikube.sigs.k8s.io=addons-093168 --label created_by.minikube.sigs.k8s.io=true
	I0917 08:38:26.436382  397419 oci.go:103] Successfully created a docker volume addons-093168
	I0917 08:38:26.436456  397419 cli_runner.go:164] Run: docker run --rm --name addons-093168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --entrypoint /usr/bin/test -v addons-093168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0917 08:38:33.360703  397419 cli_runner.go:217] Completed: docker run --rm --name addons-093168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --entrypoint /usr/bin/test -v addons-093168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (6.924191678s)
	I0917 08:38:33.360734  397419 oci.go:107] Successfully prepared a docker volume addons-093168
	I0917 08:38:33.360748  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:33.360770  397419 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 08:38:33.360820  397419 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-093168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 08:38:37.679996  397419 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-093168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.31913353s)
	I0917 08:38:37.680031  397419 kic.go:203] duration metric: took 4.319258144s to extract preloaded images to volume ...
	W0917 08:38:37.680167  397419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 08:38:37.680264  397419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 08:38:37.730224  397419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-093168 --name addons-093168 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-093168 --network addons-093168 --ip 192.168.49.2 --volume addons-093168:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0917 08:38:38.015246  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Running}}
	I0917 08:38:38.033247  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.053229  397419 cli_runner.go:164] Run: docker exec addons-093168 stat /var/lib/dpkg/alternatives/iptables
	I0917 08:38:38.096763  397419 oci.go:144] the created container "addons-093168" has a running status.
	I0917 08:38:38.096799  397419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa...
	I0917 08:38:38.316707  397419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 08:38:38.338702  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.370614  397419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 08:38:38.370640  397419 kic_runner.go:114] Args: [docker exec --privileged addons-093168 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 08:38:38.443014  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.468083  397419 machine.go:93] provisionDockerMachine start ...
	I0917 08:38:38.468181  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.487785  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.488024  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.488039  397419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 08:38:38.683369  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093168
	
	I0917 08:38:38.683409  397419 ubuntu.go:169] provisioning hostname "addons-093168"
	I0917 08:38:38.683487  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.701314  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.701561  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.701586  397419 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-093168 && echo "addons-093168" | sudo tee /etc/hostname
	I0917 08:38:38.842294  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093168
	
	I0917 08:38:38.842367  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.858454  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.858651  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.858675  397419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-093168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-093168/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-093168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 08:38:38.987912  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
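The shell snippet above is an idempotent /etc/hosts edit: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new one. The same logic in Go (writing to a placeholder file "hosts.txt" rather than /etc/hosts, and skipping the sudo plumbing):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// updateHosts mirrors the grep/sed/tee sequence above.
func updateHosts(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already mapped? Then leave the file alone.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil
	}
	out := string(data)
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(out) {
		// Rewrite the existing 127.0.1.1 entry.
		out = re.ReplaceAllString(out, "127.0.1.1 "+name)
	} else {
		// No 127.0.1.1 entry yet: append one.
		out = strings.TrimRight(out, "\n") + "\n127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	fmt.Println(updateHosts("hosts.txt", "addons-093168"))
}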
	I0917 08:38:38.987964  397419 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19648-389277/.minikube CaCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19648-389277/.minikube}
	I0917 08:38:38.988009  397419 ubuntu.go:177] setting up certificates
	I0917 08:38:38.988022  397419 provision.go:84] configureAuth start
	I0917 08:38:38.988088  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.005336  397419 provision.go:143] copyHostCerts
	I0917 08:38:39.005415  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/key.pem (1679 bytes)
	I0917 08:38:39.005548  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/ca.pem (1082 bytes)
	I0917 08:38:39.005641  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/cert.pem (1123 bytes)
	I0917 08:38:39.005712  397419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem org=jenkins.addons-093168 san=[127.0.0.1 192.168.49.2 addons-093168 localhost minikube]
	I0917 08:38:39.090312  397419 provision.go:177] copyRemoteCerts
	I0917 08:38:39.090393  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 08:38:39.090456  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.106972  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.200856  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 08:38:39.222438  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 08:38:39.243612  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 08:38:39.265193  397419 provision.go:87] duration metric: took 277.150434ms to configureAuth
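provision.go generates a server certificate whose SANs cover every name the machine answers to (127.0.0.1, 192.168.49.2, addons-093168, localhost, minikube, per the san=[...] line above). A condensed sketch with Go's crypto/x509; it is self-signed for brevity, whereas the real cert is signed by the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-093168"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"addons-093168", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}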
	I0917 08:38:39.265224  397419 ubuntu.go:193] setting minikube options for container-runtime
	I0917 08:38:39.265409  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:39.265521  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.282135  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:39.282384  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:39.282416  397419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 08:38:39.504192  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 08:38:39.504224  397419 machine.go:96] duration metric: took 1.036114607s to provisionDockerMachine
	I0917 08:38:39.504238  397419 client.go:171] duration metric: took 13.605190317s to LocalClient.Create
	I0917 08:38:39.504260  397419 start.go:167] duration metric: took 13.605271001s to libmachine.API.Create "addons-093168"
	I0917 08:38:39.504270  397419 start.go:293] postStartSetup for "addons-093168" (driver="docker")
	I0917 08:38:39.504289  397419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 08:38:39.504344  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 08:38:39.504394  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.522028  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.616778  397419 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 08:38:39.619852  397419 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 08:38:39.619881  397419 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 08:38:39.619889  397419 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 08:38:39.619897  397419 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0917 08:38:39.619908  397419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-389277/.minikube/addons for local assets ...
	I0917 08:38:39.619990  397419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-389277/.minikube/files for local assets ...
	I0917 08:38:39.620018  397419 start.go:296] duration metric: took 115.734968ms for postStartSetup
	I0917 08:38:39.620325  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.637039  397419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json ...
	I0917 08:38:39.637313  397419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 08:38:39.637369  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.653547  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.748768  397419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 08:38:39.752898  397419 start.go:128] duration metric: took 13.856163014s to createHost
	I0917 08:38:39.752925  397419 start.go:83] releasing machines lock for "addons-093168", held for 13.856335009s
	I0917 08:38:39.752987  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.769324  397419 ssh_runner.go:195] Run: cat /version.json
	I0917 08:38:39.769390  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.769443  397419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 08:38:39.769521  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.786951  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.787867  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.941853  397419 ssh_runner.go:195] Run: systemctl --version
	I0917 08:38:39.946158  397419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 08:38:40.084473  397419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 08:38:40.088727  397419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:40.106449  397419 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 08:38:40.106528  397419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:40.132230  397419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
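cni.go neutralizes competing CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. Roughly, in Go (directory and patterns taken from the find one-liners above; run against a scratch directory, not a live node):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Rename any bridge/podman CNI config that isn't already disabled,
	// matching the find/mv one-liners above.
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}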
	I0917 08:38:40.132261  397419 start.go:495] detecting cgroup driver to use...
	I0917 08:38:40.132294  397419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 08:38:40.132351  397419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 08:38:40.146387  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 08:38:40.156232  397419 docker.go:217] disabling cri-docker service (if available) ...
	I0917 08:38:40.156282  397419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 08:38:40.168347  397419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 08:38:40.181162  397419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 08:38:40.257135  397419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 08:38:40.333605  397419 docker.go:233] disabling docker service ...
	I0917 08:38:40.333673  397419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 08:38:40.351601  397419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 08:38:40.362162  397419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 08:38:40.440587  397419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 08:38:40.525972  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 08:38:40.536529  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 08:38:40.551093  397419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 08:38:40.551153  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.559832  397419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 08:38:40.559898  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.568567  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.577380  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.585958  397419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 08:38:40.594312  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.603119  397419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.617231  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.626110  397419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 08:38:40.634005  397419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 08:38:40.641779  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:40.712061  397419 ssh_runner.go:195] Run: sudo systemctl restart crio
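The crio.go steps above are all in-place edits of /etc/crio/crio.conf.d/02-crio.conf: swap the pause image, force the cgroupfs cgroup manager, and pin conmon_cgroup to "pod", then restart crio. A self-contained sketch of the same substitutions on an in-memory sample config (the sample's starting values are invented for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image and cgroup_manager: whole-line replaces, as with sed s|^.*...|...|.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// conmon_cgroup: delete any existing line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}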
	I0917 08:38:40.806565  397419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 08:38:40.806642  397419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 08:38:40.809970  397419 start.go:563] Will wait 60s for crictl version
	I0917 08:38:40.810032  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:38:40.812917  397419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 08:38:40.845887  397419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 08:38:40.845982  397419 ssh_runner.go:195] Run: crio --version
	I0917 08:38:40.880638  397419 ssh_runner.go:195] Run: crio --version
	I0917 08:38:40.915800  397419 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0917 08:38:40.917229  397419 cli_runner.go:164] Run: docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:40.933605  397419 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 08:38:40.937163  397419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 08:38:40.947226  397419 kubeadm.go:883] updating cluster {Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 08:38:40.947379  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:40.947455  397419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 08:38:41.008460  397419 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 08:38:41.008482  397419 crio.go:433] Images already preloaded, skipping extraction
	I0917 08:38:41.008524  397419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 08:38:41.040345  397419 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 08:38:41.040370  397419 cache_images.go:84] Images are preloaded, skipping loading
	I0917 08:38:41.040378  397419 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0917 08:38:41.040480  397419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-093168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 08:38:41.040565  397419 ssh_runner.go:195] Run: crio config
	I0917 08:38:41.080761  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:41.080783  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:41.080795  397419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 08:38:41.080819  397419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-093168 NodeName:addons-093168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 08:38:41.080967  397419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-093168"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 08:38:41.081023  397419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 08:38:41.089456  397419 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 08:38:41.089531  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 08:38:41.097438  397419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 08:38:41.113372  397419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 08:38:41.129326  397419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0917 08:38:41.144885  397419 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 08:38:41.147998  397419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 08:38:41.157624  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:41.237475  397419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:41.249661  397419 certs.go:68] Setting up /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168 for IP: 192.168.49.2
	I0917 08:38:41.249683  397419 certs.go:194] generating shared ca certs ...
	I0917 08:38:41.249699  397419 certs.go:226] acquiring lock for ca certs: {Name:mk8da29d5216ae8373400245c621790543881904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.249825  397419 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key
	I0917 08:38:41.614404  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt ...
	I0917 08:38:41.614440  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt: {Name:mkd45d6a60b00dd159e65c0f1b6c2e5a8afabc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.614666  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key ...
	I0917 08:38:41.614685  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key: {Name:mk5291de481583f940222c6612a96e62ccd87eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.614788  397419 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key
	I0917 08:38:41.754351  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt ...
	I0917 08:38:41.754383  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt: {Name:mk27ce36d6db90e160bdb0276068ed953effdbf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.754586  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key ...
	I0917 08:38:41.754606  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key: {Name:mk3afa86519521f4fca302906407d013abfb0d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.754709  397419 certs.go:256] generating profile certs ...
	I0917 08:38:41.754798  397419 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key
	I0917 08:38:41.754829  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt with IP's: []
	I0917 08:38:42.064154  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt ...
	I0917 08:38:42.064185  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: {Name:mk5cb5afe904908b0cba1bf17d824eee5c984153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.064362  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key ...
	I0917 08:38:42.064377  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key: {Name:mkf2e14b11acd2448049e231dd4ead7716664bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.064476  397419 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d
	I0917 08:38:42.064507  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 08:38:42.261028  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d ...
	I0917 08:38:42.261067  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d: {Name:mk077ce39ea3bb757e6d6ad979b544d7da0b437c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.261244  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d ...
	I0917 08:38:42.261257  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d: {Name:mk33433d67eea38775352092fed9c6a72038761a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.261329  397419 certs.go:381] copying /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d -> /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt
	I0917 08:38:42.261432  397419 certs.go:385] copying /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d -> /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key
	I0917 08:38:42.261485  397419 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key
	I0917 08:38:42.261504  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt with IP's: []
	I0917 08:38:42.508375  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt ...
	I0917 08:38:42.508413  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt: {Name:mk89431354833730cad316e358f6ad32f98671ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.508622  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key ...
	I0917 08:38:42.508638  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key: {Name:mk49266541348c002ddfe954fcac3e31b23d5e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.508851  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 08:38:42.508900  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem (1082 bytes)
	I0917 08:38:42.508938  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem (1123 bytes)
	I0917 08:38:42.508966  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem (1679 bytes)
	I0917 08:38:42.509614  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 08:38:42.532076  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 08:38:42.553868  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 08:38:42.575679  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 08:38:42.597095  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 08:38:42.618358  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 08:38:42.639563  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 08:38:42.660637  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 08:38:42.681627  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 08:38:42.702968  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 08:38:42.718889  397419 ssh_runner.go:195] Run: openssl version
	I0917 08:38:42.724037  397419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 08:38:42.732397  397419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.735486  397419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.735536  397419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.741586  397419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
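The last two Run lines install the minikube CA where OpenSSL can find it: OpenSSL locates certificates in /etc/ssl/certs by a subject-hash filename, so the symlink must be named <hash>.0 (b5213941.0 here, matching the openssl x509 -hash output). A tiny sketch that computes that name for a given PEM (the path "minikubeCA.pem" is a placeholder):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "openssl x509 -hash -noout" prints the subject hash OpenSSL uses
	// to look up certs in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out)) + ".0")
}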
	I0917 08:38:42.749881  397419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 08:38:42.752874  397419 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 08:38:42.752930  397419 kubeadm.go:392] StartCluster: {Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:42.753025  397419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 08:38:42.753085  397419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 08:38:42.786903  397419 cri.go:89] found id: ""
	I0917 08:38:42.786985  397419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 08:38:42.796179  397419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 08:38:42.804749  397419 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 08:38:42.804799  397419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 08:38:42.812984  397419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 08:38:42.813000  397419 kubeadm.go:157] found existing configuration files:
	
	I0917 08:38:42.813037  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 08:38:42.820866  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 08:38:42.820930  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 08:38:42.828240  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 08:38:42.835643  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 08:38:42.835737  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 08:38:42.843259  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 08:38:42.851080  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 08:38:42.851131  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 08:38:42.858437  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 08:38:42.866098  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 08:38:42.866156  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
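	(The eight Run lines above all implement one pattern: keep each kubeconfig file only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A sketch of the same stale-config sweep as a single loop -- a reconstruction of the logged behavior, not minikube's actual source:
	  for f in admin kubelet controller-manager scheduler; do
	    # a file that does not mention the endpoint is stale; remove it
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
	      || sudo rm -f "/etc/kubernetes/$f.conf"
	  done)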
	I0917 08:38:42.873252  397419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 08:38:42.908386  397419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 08:38:42.908464  397419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 08:38:42.923732  397419 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 08:38:42.923800  397419 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0917 08:38:42.923834  397419 kubeadm.go:310] OS: Linux
	I0917 08:38:42.923879  397419 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 08:38:42.923964  397419 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 08:38:42.924025  397419 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 08:38:42.924093  397419 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 08:38:42.924167  397419 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 08:38:42.924236  397419 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 08:38:42.924302  397419 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 08:38:42.924375  397419 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 08:38:42.924442  397419 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 08:38:42.973444  397419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 08:38:42.973610  397419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 08:38:42.973749  397419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 08:38:42.979391  397419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 08:38:42.982351  397419 out.go:235]   - Generating certificates and keys ...
	I0917 08:38:42.982445  397419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 08:38:42.982558  397419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 08:38:43.304222  397419 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 08:38:43.356991  397419 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 08:38:43.472470  397419 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 08:38:43.631625  397419 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 08:38:43.778369  397419 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 08:38:43.778571  397419 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-093168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:44.236292  397419 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 08:38:44.236448  397419 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-093168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:44.386759  397419 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 08:38:44.547662  397419 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 08:38:45.256381  397419 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 08:38:45.256470  397419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 08:38:45.352447  397419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 08:38:45.496534  397419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 08:38:45.783093  397419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 08:38:45.948400  397419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 08:38:46.126268  397419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 08:38:46.126739  397419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 08:38:46.129290  397419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 08:38:46.131498  397419 out.go:235]   - Booting up control plane ...
	I0917 08:38:46.131624  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 08:38:46.131735  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 08:38:46.131825  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 08:38:46.139890  397419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 08:38:46.145973  397419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 08:38:46.146041  397419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 08:38:46.229694  397419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 08:38:46.229838  397419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 08:38:46.732374  397419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.404175ms
	I0917 08:38:46.732502  397419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 08:38:51.232483  397419 kubeadm.go:310] [api-check] The API server is healthy after 4.501470708s
	I0917 08:38:51.243357  397419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 08:38:51.254150  397419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 08:38:51.272346  397419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 08:38:51.272569  397419 kubeadm.go:310] [mark-control-plane] Marking the node addons-093168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 08:38:51.279966  397419 kubeadm.go:310] [bootstrap-token] Using token: k80no8.z164l1wfcaclt3ve
	I0917 08:38:51.281525  397419 out.go:235]   - Configuring RBAC rules ...
	I0917 08:38:51.281680  397419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 08:38:51.284683  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 08:38:51.290003  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 08:38:51.293675  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 08:38:51.296125  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 08:38:51.298653  397419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 08:38:51.638681  397419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 08:38:52.057839  397419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 08:38:52.638211  397419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 08:38:52.639067  397419 kubeadm.go:310] 
	I0917 08:38:52.639151  397419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 08:38:52.639161  397419 kubeadm.go:310] 
	I0917 08:38:52.639256  397419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 08:38:52.639296  397419 kubeadm.go:310] 
	I0917 08:38:52.639346  397419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 08:38:52.639417  397419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 08:38:52.639470  397419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 08:38:52.639478  397419 kubeadm.go:310] 
	I0917 08:38:52.639522  397419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 08:38:52.639529  397419 kubeadm.go:310] 
	I0917 08:38:52.639568  397419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 08:38:52.639593  397419 kubeadm.go:310] 
	I0917 08:38:52.639638  397419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 08:38:52.639707  397419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 08:38:52.639770  397419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 08:38:52.639776  397419 kubeadm.go:310] 
	I0917 08:38:52.639844  397419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 08:38:52.639938  397419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 08:38:52.639972  397419 kubeadm.go:310] 
	I0917 08:38:52.640081  397419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k80no8.z164l1wfcaclt3ve \
	I0917 08:38:52.640203  397419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df9ded58c525a6d55df91cd644932b8a694d03f6beda3e691beb74ea1851cf09 \
	I0917 08:38:52.640238  397419 kubeadm.go:310] 	--control-plane 
	I0917 08:38:52.640248  397419 kubeadm.go:310] 
	I0917 08:38:52.640345  397419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 08:38:52.640356  397419 kubeadm.go:310] 
	I0917 08:38:52.640453  397419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k80no8.z164l1wfcaclt3ve \
	I0917 08:38:52.640571  397419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df9ded58c525a6d55df91cd644932b8a694d03f6beda3e691beb74ea1851cf09 
	I0917 08:38:52.642642  397419 kubeadm.go:310] W0917 08:38:42.905770    1305 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:52.643061  397419 kubeadm.go:310] W0917 08:38:42.906409    1305 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:52.643311  397419 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0917 08:38:52.643438  397419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
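	(Both v1beta3 warnings above point at the same remedy, quoted in the warning text itself. A sketch of that migration; the input path is the one used in this run, the output path is a placeholder:
	  # rewrite the deprecated kubeadm.k8s.io/v1beta3 config using a newer API version
	  sudo kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml \
	    --new-config /var/tmp/minikube/kubeadm-migrated.yaml)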
	I0917 08:38:52.643454  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:52.643464  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:52.645324  397419 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 08:38:52.646624  397419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 08:38:52.650315  397419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 08:38:52.650335  397419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 08:38:52.667218  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
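	(Per the cni.go lines above, the docker driver + crio runtime pair selects kindnet, and the apply just ran its manifest. One way to confirm the CNI actually rolled out -- hedged, since the DaemonSet name "kindnet" is an assumption about the applied manifest:
	  # assumes the manifest creates a DaemonSet named "kindnet" in kube-system
	  kubectl -n kube-system rollout status daemonset kindnet --timeout=2m)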
	I0917 08:38:52.889823  397419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 08:38:52.889885  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:52.889918  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-093168 minikube.k8s.io/updated_at=2024_09_17T08_38_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=addons-093168 minikube.k8s.io/primary=true
	I0917 08:38:52.897123  397419 ops.go:34] apiserver oom_adj: -16
	I0917 08:38:53.039509  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:53.539727  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.039909  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.539969  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.040209  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.540163  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.039997  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.540545  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.039787  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.104143  397419 kubeadm.go:1113] duration metric: took 4.214320429s to wait for elevateKubeSystemPrivileges
	I0917 08:38:57.104195  397419 kubeadm.go:394] duration metric: took 14.351272056s to StartCluster
	I0917 08:38:57.104218  397419 settings.go:142] acquiring lock: {Name:mk95cfba95882d4e40150b5e054772c8fe045040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:57.104356  397419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:57.104769  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/kubeconfig: {Name:mk341f12644f68f3679935ee94cc49d156e11570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:57.105015  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 08:38:57.105016  397419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 08:38:57.105108  397419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 08:38:57.105239  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:57.105256  397419 addons.go:69] Setting cloud-spanner=true in profile "addons-093168"
	I0917 08:38:57.105271  397419 addons.go:69] Setting gcp-auth=true in profile "addons-093168"
	I0917 08:38:57.105277  397419 addons.go:234] Setting addon cloud-spanner=true in "addons-093168"
	I0917 08:38:57.105276  397419 addons.go:69] Setting storage-provisioner=true in profile "addons-093168"
	I0917 08:38:57.105278  397419 addons.go:69] Setting volumesnapshots=true in profile "addons-093168"
	I0917 08:38:57.105298  397419 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-093168"
	I0917 08:38:57.105238  397419 addons.go:69] Setting yakd=true in profile "addons-093168"
	I0917 08:38:57.105296  397419 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-093168"
	I0917 08:38:57.105312  397419 addons.go:69] Setting registry=true in profile "addons-093168"
	I0917 08:38:57.105312  397419 addons.go:234] Setting addon volumesnapshots=true in "addons-093168"
	I0917 08:38:57.105317  397419 addons.go:234] Setting addon yakd=true in "addons-093168"
	I0917 08:38:57.105321  397419 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-093168"
	I0917 08:38:57.105323  397419 addons.go:69] Setting helm-tiller=true in profile "addons-093168"
	I0917 08:38:57.105332  397419 addons.go:69] Setting metrics-server=true in profile "addons-093168"
	I0917 08:38:57.105335  397419 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-093168"
	I0917 08:38:57.105259  397419 addons.go:69] Setting volcano=true in profile "addons-093168"
	I0917 08:38:57.105344  397419 addons.go:234] Setting addon metrics-server=true in "addons-093168"
	I0917 08:38:57.105245  397419 addons.go:69] Setting inspektor-gadget=true in profile "addons-093168"
	I0917 08:38:57.105347  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105351  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105324  397419 addons.go:234] Setting addon registry=true in "addons-093168"
	I0917 08:38:57.105357  397419 addons.go:234] Setting addon inspektor-gadget=true in "addons-093168"
	I0917 08:38:57.105362  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105353  397419 addons.go:234] Setting addon volcano=true in "addons-093168"
	I0917 08:38:57.105486  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105291  397419 mustload.go:65] Loading cluster: addons-093168
	I0917 08:38:57.105336  397419 addons.go:234] Setting addon helm-tiller=true in "addons-093168"
	I0917 08:38:57.105608  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105707  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:57.105371  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105931  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105935  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105960  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105377  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.106050  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106193  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105960  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106458  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106627  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105313  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105345  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.107248  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105376  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105250  397419 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-093168"
	I0917 08:38:57.108052  397419 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-093168"
	I0917 08:38:57.108362  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105380  397419 addons.go:69] Setting default-storageclass=true in profile "addons-093168"
	I0917 08:38:57.105302  397419 addons.go:234] Setting addon storage-provisioner=true in "addons-093168"
	I0917 08:38:57.105388  397419 addons.go:69] Setting ingress-dns=true in profile "addons-093168"
	I0917 08:38:57.105386  397419 addons.go:69] Setting ingress=true in profile "addons-093168"
	I0917 08:38:57.108644  397419 addons.go:234] Setting addon ingress=true in "addons-093168"
	I0917 08:38:57.108680  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108700  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108747  397419 addons.go:234] Setting addon ingress-dns=true in "addons-093168"
	I0917 08:38:57.108788  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108821  397419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-093168"
	I0917 08:38:57.112690  397419 out.go:177] * Verifying Kubernetes components...
	I0917 08:38:57.114189  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:57.124402  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.124402  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.124587  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125036  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125084  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125993  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.143502  397419 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 08:38:57.144872  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 08:38:57.144901  397419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 08:38:57.144980  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	W0917 08:38:57.150681  397419 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0917 08:38:57.153691  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.155722  397419 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 08:38:57.159231  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 08:38:57.159256  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 08:38:57.159314  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.172289  397419 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 08:38:57.176642  397419 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:57.176666  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 08:38:57.176733  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.193988  397419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 08:38:57.196004  397419 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 08:38:57.197115  397419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:57.197136  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 08:38:57.197200  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.202125  397419 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 08:38:57.203455  397419 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 08:38:57.203530  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 08:38:57.203679  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.204660  397419 addons.go:234] Setting addon default-storageclass=true in "addons-093168"
	I0917 08:38:57.204707  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.204824  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 08:38:57.205196  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.207284  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:57.207449  397419 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 08:38:57.208612  397419 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 08:38:57.208633  397419 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 08:38:57.208701  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.208883  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:57.210517  397419 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:57.210538  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 08:38:57.210595  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.210853  397419 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 08:38:57.212148  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 08:38:57.212167  397419 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 08:38:57.212221  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.216414  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.219236  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 08:38:57.221033  397419 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-093168"
	I0917 08:38:57.221085  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.221137  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 08:38:57.221157  397419 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 08:38:57.221227  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.221586  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.221963  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 08:38:57.223885  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 08:38:57.225253  397419 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 08:38:57.226499  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 08:38:57.226722  397419 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:57.226737  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 08:38:57.226802  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.229771  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 08:38:57.229842  397419 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 08:38:57.231204  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 08:38:57.231925  397419 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:57.231954  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 08:38:57.232015  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.240168  397419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:57.240188  397419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 08:38:57.240249  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.251934  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.253019  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.256107  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.256961  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 08:38:57.270556  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 08:38:57.272877  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 08:38:57.274130  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 08:38:57.274138  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.274160  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 08:38:57.274232  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.286114  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.286432  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.286552  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.287928  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.292989  397419 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 08:38:57.293246  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.295525  397419 out.go:177]   - Using image docker.io/busybox:stable
	I0917 08:38:57.295767  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.297062  397419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:57.297077  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 08:38:57.297117  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.299372  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.306226  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.314733  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	W0917 08:38:57.337065  397419 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 08:38:57.337105  397419 retry.go:31] will retry after 135.437372ms: ssh: handshake failed: EOF
	I0917 08:38:57.346335  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 08:38:57.356789  397419 ssh_runner.go:195] Run: sudo systemctl start kubelet
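	(The long one-liner two lines up injects a host record for host.minikube.internal into the CoreDNS Corefile. Stripped of the minikube binary and kubeconfig paths, the same patch looks like this -- a sketch assuming kubectl already targets the cluster:
	  # insert a hosts{} stanza before the forward plugin, add "log" before "errors",
	  # then replace the ConfigMap in place
	  kubectl -n kube-system get configmap coredns -o yaml \
	    | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	          -e '/^        errors *$/i \        log' \
	    | kubectl replace -f -)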
	I0917 08:38:57.538116  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 08:38:57.538148  397419 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 08:38:57.541546  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:57.642930  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:38:57.642961  397419 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 08:38:57.652875  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 08:38:57.652902  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 08:38:57.744251  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:57.752468  397419 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 08:38:57.752499  397419 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 08:38:57.753674  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 08:38:57.753698  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 08:38:57.833558  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 08:38:57.833662  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 08:38:57.834064  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:57.835232  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:57.842341  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 08:38:57.842375  397419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 08:38:57.849540  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:38:57.853917  397419 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 08:38:57.853947  397419 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 08:38:57.936443  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:57.936758  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 08:38:57.936784  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 08:38:57.938952  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:57.941233  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 08:38:57.941258  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 08:38:58.033712  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:58.034229  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 08:38:58.034295  397419 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 08:38:58.046437  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 08:38:58.046529  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 08:38:58.047136  397419 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 08:38:58.047196  397419 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 08:38:58.133693  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 08:38:58.133782  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 08:38:58.139956  397419 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:58.139985  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 08:38:58.233802  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 08:38:58.233848  397419 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 08:38:58.252638  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:38:58.252687  397419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 08:38:58.254386  397419 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 08:38:58.254464  397419 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 08:38:58.333784  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 08:38:58.333878  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 08:38:58.449224  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 08:38:58.449259  397419 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 08:38:58.449658  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:58.548889  397419 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:38:58.548923  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 08:38:58.633498  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 08:38:58.633532  397419 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 08:38:58.633842  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 08:38:58.633864  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 08:38:58.634541  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:38:58.750791  397419 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 08:38:58.750827  397419 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 08:38:58.936229  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:38:59.233524  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:38:59.233625  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 08:38:59.333560  397419 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 08:38:59.333595  397419 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 08:38:59.653548  397419 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 08:38:59.653582  397419 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 08:38:59.654019  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 08:38:59.654039  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 08:38:59.750974  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:38:59.844245  397419 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.497868768s)
	I0917 08:38:59.844279  397419 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0917 08:38:59.845507  397419 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.48868759s)
	I0917 08:38:59.846428  397419 node_ready.go:35] waiting up to 6m0s for node "addons-093168" to be "Ready" ...
	I0917 08:39:00.150766  397419 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:00.150864  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 08:39:00.241261  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 08:39:00.241385  397419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 08:39:00.434396  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:00.434751  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 08:39:00.434837  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 08:39:00.550189  397419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-093168" context rescaled to 1 replicas
	I0917 08:39:00.748755  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 08:39:00.748843  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 08:39:00.937410  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:00.937442  397419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 08:39:01.233803  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:01.943544  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
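	(The node_ready lines poll the node object until its Ready condition flips to True. Outside the test harness the equivalent wait is a single kubectl call -- a sketch using the context name this report uses elsewhere:
	  kubectl --context addons-093168 wait --for=condition=Ready node/addons-093168 --timeout=6m)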
	I0917 08:39:03.261179  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.719582492s)
	I0917 08:39:03.261217  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.516878812s)
	I0917 08:39:03.261224  397419 addons.go:475] Verifying addon ingress=true in "addons-093168"
	I0917 08:39:03.261298  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.427173682s)
	I0917 08:39:03.261369  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.426103213s)
	I0917 08:39:03.261406  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.411830401s)
	I0917 08:39:03.261493  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.325021448s)
	I0917 08:39:03.261534  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.322551933s)
	I0917 08:39:03.261613  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.227807299s)
	I0917 08:39:03.261653  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.811965691s)
	I0917 08:39:03.261677  397419 addons.go:475] Verifying addon registry=true in "addons-093168"
	I0917 08:39:03.261733  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.627156118s)
	I0917 08:39:03.261799  397419 addons.go:475] Verifying addon metrics-server=true in "addons-093168"
	I0917 08:39:03.263039  397419 out.go:177] * Verifying ingress addon...
	I0917 08:39:03.264106  397419 out.go:177] * Verifying registry addon...
	I0917 08:39:03.265798  397419 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 08:39:03.266577  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 08:39:03.338558  397419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:03.338666  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:03.338842  397419 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 08:39:03.338910  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
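The kapi.go:96 lines that dominate the rest of this log are a simple label-selector poll: list the pods matching a selector, then keep re-checking until every match reports the Ready condition. The following is a minimal client-go sketch of that loop, not minikube's actual code; waitForLabel and podReady are illustrative names, the interval and timeout are assumptions, and it assumes a recent k8s.io/apimachinery for PollUntilContextTimeout.

	// Sketch only: poll pods matching a label selector until all are Ready.
	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // nothing yet; surfaces above as "current state: Pending"
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return true, nil
			})
	}

Called as, say, waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"), a loop of this shape would produce exactly the repeated Pending lines seen below until the pods come up.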
	W0917 08:39:03.344429  397419 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
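The "Operation cannot be fulfilled ... the object has been modified" error in the warning above is Kubernetes' optimistic-concurrency conflict: the StorageClass changed between the read and the write, so the stale resourceVersion is rejected. The standard remedy in client-go is retry.RetryOnConflict, which re-reads and re-applies the mutation. A hedged sketch follows; markDefault is a hypothetical helper, not an addon callback from this run.

	// Sketch: retry a read-modify-write on conflict, assuming a clientset cs.
	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			// Well-known annotation marking the cluster's default storage class.
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict here triggers another Get/Update round
		})
	}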
	I0917 08:39:03.835535  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:03.868020  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.931736693s)
	W0917 08:39:03.868122  397419 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 08:39:03.868142  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.117119927s)
	I0917 08:39:03.868181  397419 retry.go:31] will retry after 226.647603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 08:39:03.868254  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.433802493s)
	I0917 08:39:03.869652  397419 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-093168 service yakd-dashboard -n yakd-dashboard
	
	I0917 08:39:03.934770  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.095668  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
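The failure being retried here ("no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first") is an ordering problem: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and the API server has not yet established the new type when the custom resource arrives. minikube's answer is the timed retry plus the --force re-apply above. Another common pattern, sketched below under assumed imports, is to block on the CRD's Established condition before applying any custom resources; waitForCRD is an illustrative name.

	// Sketch: wait until a CRD reports Established before applying CRs.
	package example

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	func waitForCRD(ctx context.Context, cs apiextclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // CRD not visible yet; keep polling
				}
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

For this run the gate would be something like waitForCRD(ctx, cs, "volumesnapshotclasses.snapshot.storage.k8s.io") before applying csi-hostpath-snapshotclass.yaml.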
	I0917 08:39:04.269371  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.269859  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:04.350132  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:04.360728  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 08:39:04.360808  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:39:04.384783  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:39:04.471408  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.23753895s)
	I0917 08:39:04.471460  397419 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-093168"
	I0917 08:39:04.473008  397419 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 08:39:04.475211  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 08:39:04.535330  397419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:04.535353  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:04.598789  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 08:39:04.615582  397419 addons.go:234] Setting addon gcp-auth=true in "addons-093168"
	I0917 08:39:04.615652  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:39:04.616089  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:39:04.633132  397419 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 08:39:04.633192  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:39:04.651065  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:39:04.769973  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.770233  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.035291  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:05.335175  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:05.336078  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.535256  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:05.769510  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:05.769763  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.979262  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.269556  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:06.269756  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:06.350348  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:06.479032  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.769819  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:06.770387  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:06.979151  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.991964  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.89623192s)
	I0917 08:39:06.992009  397419 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.358851016s)
	I0917 08:39:06.993965  397419 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 08:39:06.995369  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:39:06.996678  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 08:39:06.996699  397419 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 08:39:07.050138  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 08:39:07.050166  397419 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 08:39:07.070212  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:07.070239  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 08:39:07.088585  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:07.269903  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:07.270150  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:07.478566  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:07.742409  397419 addons.go:475] Verifying addon gcp-auth=true in "addons-093168"
	I0917 08:39:07.743971  397419 out.go:177] * Verifying gcp-auth addon...
	I0917 08:39:07.746772  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 08:39:07.749628  397419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 08:39:07.749648  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:07.850058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:07.850470  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:07.980638  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:08.250181  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:08.269219  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:08.269486  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:08.478757  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:08.750637  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:08.769245  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:08.769763  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:08.849706  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:08.978545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:09.250459  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:09.269495  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.269663  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:09.479237  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:09.749689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:09.769399  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:09.769720  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.978863  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:10.250410  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:10.269526  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:10.269619  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.478837  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:10.750940  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:10.769805  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:10.770515  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.979280  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:11.249995  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:11.269719  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:11.270190  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.350491  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:11.478320  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:11.750247  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:11.769390  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:11.769429  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.978986  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:12.250516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:12.269587  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.269693  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.480184  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:12.750404  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:12.769444  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.769591  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.978948  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.250817  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:13.269637  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.270016  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.479104  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.749738  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:13.769523  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.769820  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.850119  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:13.978949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.249884  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:14.269638  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.270062  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.479204  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.749928  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:14.769438  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.769821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.978839  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.250562  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:15.269409  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.269947  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.478860  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.750835  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:15.769345  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.770015  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.850276  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:15.979293  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.250064  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:16.269826  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.270274  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.478595  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.750278  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:16.769441  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.769627  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.978785  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.249585  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:17.269341  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.269848  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.479260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.749952  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:17.769578  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.769936  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.979325  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.249779  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:18.269465  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.269775  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.350075  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:18.478976  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.750758  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:18.769496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.769979  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.979120  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.249745  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:19.269362  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.269944  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.479390  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.749971  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:19.769917  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.770115  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.978384  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.250150  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:20.269613  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.270040  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.479591  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.750572  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:20.769329  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.769808  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.849500  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:20.978496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.250173  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:21.269174  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.269534  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.478769  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.751128  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:21.769357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.769371  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.978913  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.250688  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:22.269349  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.269695  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.478881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.750753  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:22.769486  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.769809  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.849938  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:22.981047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.249913  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:23.269440  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.269919  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.478892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.750856  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:23.769354  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.769865  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.978955  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.249899  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:24.269545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.269991  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.479144  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.750022  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:24.769833  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.770464  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.978298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.250252  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:25.269224  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.269557  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.350289  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:25.479127  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.749639  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:25.769205  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.769585  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.979064  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.250038  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:26.269663  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.270152  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.478995  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.750285  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:26.769308  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.769370  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.978745  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.250676  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:27.269322  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.269652  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.478412  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.750691  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:27.769200  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.769604  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.849933  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:27.979206  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.249964  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:28.269520  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.269919  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.479193  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.749933  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:28.769877  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.770211  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.979141  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.249874  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:29.270072  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.270348  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.478073  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.749899  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:29.769818  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.770374  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.979288  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.250272  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:30.269500  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.269546  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.350342  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:30.479086  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.749787  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:30.769541  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.770013  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.979093  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.250841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:31.269421  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.269882  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.479027  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.749892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:31.769497  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.769834  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.979224  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.250379  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:32.269381  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.269400  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.479357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.750376  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:32.769602  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.769757  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.850423  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:32.979114  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.251004  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:33.269908  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.270175  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.479600  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.749949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:33.769584  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.770008  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.979236  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.250012  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:34.269687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.270180  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.479255  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.750023  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:34.769580  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.770002  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.978387  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.250069  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:35.269828  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.270241  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.349451  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:35.478206  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.749945  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:35.769452  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.769865  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.978859  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.250835  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:36.269592  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.269917  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.478473  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.750428  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:36.769595  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.769685  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.978362  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.250516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:37.269304  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.269681  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.350217  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:37.479043  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.750460  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:37.769597  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.769948  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.978771  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.250668  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:38.269338  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.269667  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.478938  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.750692  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:38.769540  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.770044  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.979152  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.249775  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:39.269195  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:39.269607  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.478771  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.750626  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:39.769136  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:39.769575  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.850038  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:39.979047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.249695  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:40.269441  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:40.269779  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.479084  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.749817  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:40.769332  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:40.769870  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.978708  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.250949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:41.269314  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:41.269830  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.480399  397419 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:41.480422  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.760397  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:41.837192  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.837670  397419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:41.837689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:41.849891  397419 node_ready.go:49] node "addons-093168" has status "Ready":"True"
	I0917 08:39:41.849914  397419 node_ready.go:38] duration metric: took 42.0034583s for node "addons-093168" to be "Ready" ...
	I0917 08:39:41.849924  397419 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 08:39:41.858669  397419 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace to be "Ready" ...
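At 08:39:41 the node_ready.go poll finally flips from "Ready":"False" to "Ready":"True" (about 42s after the checks began), which unblocks the per-pod readiness waits that follow. Both checks read the same status.conditions list on the object; a minimal sketch of the node-side test, under the same assumed client-go imports as the earlier sketches, could look like this:

	// Sketch: a node is Ready when its NodeReady condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}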
	I0917 08:39:42.038738  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.251747  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:42.352912  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:42.353583  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.479530  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.750176  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:42.770265  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:42.770895  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.979804  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.251776  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:43.351669  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:43.352090  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.364736  397419 pod_ready.go:93] pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.364757  397419 pod_ready.go:82] duration metric: took 1.50606765s for pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.364777  397419 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.369471  397419 pod_ready.go:93] pod "etcd-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.369494  397419 pod_ready.go:82] duration metric: took 4.709608ms for pod "etcd-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.369508  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.373655  397419 pod_ready.go:93] pod "kube-apiserver-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.373672  397419 pod_ready.go:82] duration metric: took 4.156439ms for pod "kube-apiserver-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.373680  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.377527  397419 pod_ready.go:93] pod "kube-controller-manager-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.377561  397419 pod_ready.go:82] duration metric: took 3.873985ms for pod "kube-controller-manager-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.377572  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t77c5" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.450713  397419 pod_ready.go:93] pod "kube-proxy-t77c5" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.450741  397419 pod_ready.go:82] duration metric: took 73.161651ms for pod "kube-proxy-t77c5" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.450755  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.479047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.750717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:43.769660  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:43.769998  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.850947  397419 pod_ready.go:93] pod "kube-scheduler-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.850971  397419 pod_ready.go:82] duration metric: took 400.20789ms for pod "kube-scheduler-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.850982  397419 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.980093  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.250260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:44.269521  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:44.270044  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:44.479161  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.750804  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:44.770420  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:44.770636  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.035777  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.250723  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:45.269748  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:45.270038  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.480689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.750763  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:45.769885  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:45.770680  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.857292  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:45.980017  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.250727  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:46.269788  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:46.270046  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:46.539234  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.751501  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:46.835507  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:46.836067  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.036749  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.250892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:47.336881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:47.336877  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.536654  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.750566  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:47.770379  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:47.770654  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.857353  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:47.980545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.251036  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:48.270119  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:48.270766  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.481111  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.751338  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:48.770188  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.771890  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:48.980058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.250249  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:49.270268  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:49.270358  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.480036  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.750762  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:49.770978  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:49.772174  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.857941  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:49.980041  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.250706  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:50.269862  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:50.270014  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:50.480731  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.751060  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:50.770120  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:50.770641  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.035548  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.250927  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:51.337208  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:51.337503  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.480679  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.750819  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:51.769976  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:51.770649  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.980192  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.250287  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:52.273280  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.353216  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:52.356559  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:52.479644  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.750695  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:52.769840  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:52.769992  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.980341  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.250812  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:53.269713  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:53.269993  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.479306  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.751203  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:53.769942  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.770231  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:53.982444  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.251381  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:54.270391  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:54.270907  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.357551  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:54.479329  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.750585  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:54.769800  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.770242  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:54.980330  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.250105  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:55.272058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:55.272343  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.480049  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.750228  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:55.769721  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:55.769811  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.979630  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.250644  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:56.270143  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.270801  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:56.361917  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:56.535770  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.750820  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:56.770318  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.834677  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.037436  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:57.251657  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:57.338559  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.340296  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:57.539728  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:57.750702  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:57.836323  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.836465  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.035687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:58.250979  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:58.270445  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:58.270847  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.480099  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:58.750815  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:58.770260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:58.770835  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.858855  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:58.980298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:59.250242  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:59.271058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:59.271285  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:59.534742  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:59.749993  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:59.770735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:59.770822  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:59.980421  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:00.250549  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:00.269795  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:00.270066  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:00.481133  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:00.750352  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:00.770060  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:00.770078  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:00.980516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:01.250748  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:01.269906  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:01.270542  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:01.357167  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:01.479831  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:01.750735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:01.851522  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:01.852196  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:01.980255  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:02.250668  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:02.270004  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:02.270239  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:02.480121  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:02.750937  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:02.770293  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:02.770548  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:02.980319  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:03.250471  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:03.269687  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:03.270015  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:03.358379  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:03.480308  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:03.750910  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:03.769915  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:03.770350  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:03.980888  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:04.250949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:04.334052  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:04.334547  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:04.536288  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:04.751331  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:04.769923  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:04.770074  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:04.979484  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:05.250753  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:05.269588  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:05.270367  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:05.479717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:05.750044  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:05.770343  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:05.770697  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:05.857232  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:05.980252  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:06.250527  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:06.269894  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:06.270178  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:06.479711  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:06.750183  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:06.771071  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:06.771665  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:06.979659  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:07.251357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:07.270510  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:07.270939  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:07.480189  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:07.750845  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:07.770209  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:07.771533  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:07.857980  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:07.983095  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:08.250342  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:08.270999  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:08.271094  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:08.479975  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:08.751137  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:08.770431  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:08.770712  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:08.980321  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:09.251024  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:09.270126  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:09.270735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:09.480983  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:09.751277  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:09.769930  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:09.770147  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:09.980150  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:10.250493  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:10.269821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:10.271102  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:10.356970  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:10.481755  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:10.749841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:10.769711  397419 kapi.go:107] duration metric: took 1m7.503126792s to wait for kubernetes.io/minikube-addons=registry ...
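The kapi.go:96 lines are the same idea applied to a label selector rather than a single pod: list the pods matching, for example, kubernetes.io/minikube-addons=registry and succeed once at least one pod exists and all of them are Ready, at which point kapi.go:107 records the duration metric as above. A sketch under the same assumptions as the previous one (the function name and 3-second interval are illustrative; imports are repeated so the file compiles on its own):

// Hedged sketch of the label-selector wait behind kapi.go:96/107.
package waiters

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitLabelReady returns once every pod matching selector is Ready.
// The repeated "current state: Pending: [<nil>]" lines in the log
// correspond to polls that land in one of the false branches below.
func waitLabelReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet, keep waiting
		}
		for _, pod := range pods.Items {
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	})
}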
	I0917 08:40:10.770295  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:10.979832  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:11.250142  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:11.270431  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:11.480956  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:11.753496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:11.770003  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:11.980475  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:12.250784  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:12.270813  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:12.357211  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:12.480873  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:12.751126  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:12.770604  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:12.979811  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:13.250139  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:13.270888  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:13.480241  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:13.750443  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:13.769994  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:13.979631  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:14.250829  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:14.270340  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:14.480298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:14.750382  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:14.769880  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:14.857115  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:14.980593  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:15.250737  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:15.269909  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:15.480460  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:15.750879  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:15.770052  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:15.979744  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:16.251095  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:16.270338  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:16.480567  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:16.749687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:16.770077  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.035489  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:17.250313  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:17.269943  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.356644  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:17.480054  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:17.750392  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:17.769702  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.980088  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:18.250474  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:18.269932  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:18.511698  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:18.750521  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:18.852675  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:18.979597  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:19.249859  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:19.270206  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:19.357692  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:19.480159  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:19.750104  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:19.771108  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:19.979504  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:20.251660  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:20.271175  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:20.480098  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:20.750670  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:20.770690  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:20.980839  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:21.250744  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:21.270685  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:21.357832  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:21.480348  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:21.750284  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:21.769821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:21.981107  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:22.249898  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:22.270237  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:22.480433  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:22.750573  397419 kapi.go:107] duration metric: took 1m15.003789133s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 08:40:22.752532  397419 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-093168 cluster.
	I0917 08:40:22.753817  397419 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 08:40:22.755155  397419 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
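The three gcp-auth messages above describe an admission webhook that mounts GCP credentials into every newly created pod unless the pod carries the gcp-auth-skip-secret label. A hedged client-go illustration of that opt-out, not the addon's own code; the pod name, namespace, image, and the "true" value are arbitrary examples (per the message, the label key is what the webhook looks for):

// Hedged example: create a pod the gcp-auth webhook will leave alone.
package gcpauthskip

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createUnmountedPod creates a pod labeled so that GCP credentials are
// not injected into it. All concrete values here are illustrative.
func createUnmountedPod(ctx context.Context, cs kubernetes.Interface) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}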
	I0917 08:40:22.769882  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:22.979715  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:23.270378  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:23.480884  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:23.770749  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:23.856903  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:23.979682  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:24.270418  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:24.481750  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:24.838546  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:24.979926  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:25.336387  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:25.536841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:25.836400  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:25.857822  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:26.038227  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:26.270962  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:26.480310  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:26.769993  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:26.979717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:27.270245  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:27.479626  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:27.770138  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:27.979728  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:28.270445  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:28.357521  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:28.479512  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:28.771302  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:28.980203  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:29.272777  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:29.479974  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:29.771290  397419 kapi.go:107] duration metric: took 1m26.505487302s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 08:40:30.036881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:30.480783  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:30.856907  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:30.980652  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:31.480186  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:31.979880  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:32.481022  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:32.979408  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:33.357762  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:33.479779  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:33.979963  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:34.480525  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:34.980951  397419 kapi.go:107] duration metric: took 1m30.505737137s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 08:40:35.011214  397419 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, helm-tiller, nvidia-device-plugin, storage-provisioner, metrics-server, default-storageclass, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0917 08:40:35.088827  397419 addons.go:510] duration metric: took 1m37.983731495s for enable addons: enabled=[cloud-spanner ingress-dns helm-tiller nvidia-device-plugin storage-provisioner metrics-server default-storageclass inspektor-gadget yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0917 08:40:35.963282  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:38.356952  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:40.357057  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:42.857137  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:45.357585  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:47.415219  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:49.856695  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:52.357369  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:54.856959  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:56.857573  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:59.356748  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:01.357311  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:03.857150  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:05.857298  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:08.356921  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:10.856637  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:12.857089  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:15.356886  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:17.357162  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:19.857088  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:21.857768  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:22.357225  397419 pod_ready.go:93] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"True"
	I0917 08:41:22.357248  397419 pod_ready.go:82] duration metric: took 1m38.50625923s for pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.357261  397419 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.361581  397419 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace has status "Ready":"True"
	I0917 08:41:22.361602  397419 pod_ready.go:82] duration metric: took 4.33393ms for pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.361622  397419 pod_ready.go:39] duration metric: took 1m40.511686973s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 08:41:22.361642  397419 api_server.go:52] waiting for apiserver process to appear ...
	I0917 08:41:22.361682  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:22.361731  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:22.396772  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:22.396810  397419 cri.go:89] found id: ""
	I0917 08:41:22.396820  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:22.396885  397419 ssh_runner.go:195] Run: which crictl
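The cri.go lines here show the discovery pattern that repeats below for each control-plane component: run sudo crictl ps -a --quiet --name=<component> (minikube does this over SSH via ssh_runner) and collect the bare container IDs it prints, one per line. A local sketch of that step; findContainers is an illustrative name, and the crictl invocation is copied from the log:

// Hedged sketch of the cri.go:54/89 container-discovery step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers returns the container IDs crictl prints for a given
// component name; --quiet makes crictl emit only the IDs.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := findContainers("kube-apiserver")
	if err != nil {
		panic(err)
	}
	fmt.Printf("found ids: %v\n", ids)
}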
	I0917 08:41:22.401393  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:22.401457  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:22.433869  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:22.433890  397419 cri.go:89] found id: ""
	I0917 08:41:22.433898  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:22.433944  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.437332  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:22.437407  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:22.472376  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:22.472397  397419 cri.go:89] found id: ""
	I0917 08:41:22.472404  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:22.472448  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.475763  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:22.475824  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:22.509241  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:22.509272  397419 cri.go:89] found id: ""
	I0917 08:41:22.509284  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:22.509335  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.512804  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:22.512865  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:22.546986  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:22.547007  397419 cri.go:89] found id: ""
	I0917 08:41:22.547015  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:22.547060  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.550402  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:22.550459  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:22.584566  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:22.584588  397419 cri.go:89] found id: ""
	I0917 08:41:22.584604  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:22.584655  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.588033  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:22.588092  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:22.621636  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:22.621662  397419 cri.go:89] found id: ""
	I0917 08:41:22.621672  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:22.621725  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.625177  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:22.625207  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:22.651122  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:22.651158  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:22.750350  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:22.750382  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:22.794944  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:22.794981  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:22.847406  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:22.847443  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:22.882612  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:22.882647  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:22.938657  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:22.938694  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:22.980301  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:22.980332  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:23.057322  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:23.057359  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:23.092524  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:23.092557  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:23.129832  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:23.129871  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:23.165427  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:23.165458  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:25.744385  397419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 08:41:25.758404  397419 api_server.go:72] duration metric: took 2m28.653351209s to wait for apiserver process to appear ...
	I0917 08:41:25.758434  397419 api_server.go:88] waiting for apiserver healthz status ...
	I0917 08:41:25.758473  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:25.758517  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:25.791782  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:25.791813  397419 cri.go:89] found id: ""
	I0917 08:41:25.791824  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:25.791876  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.795162  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:25.795222  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:25.827605  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:25.827632  397419 cri.go:89] found id: ""
	I0917 08:41:25.827642  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:25.827695  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.830956  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:25.831016  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:25.864525  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:25.864552  397419 cri.go:89] found id: ""
	I0917 08:41:25.864562  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:25.864628  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.867980  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:25.868042  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:25.901946  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:25.901966  397419 cri.go:89] found id: ""
	I0917 08:41:25.901977  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:25.902026  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.905404  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:25.905458  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:25.938828  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:25.938850  397419 cri.go:89] found id: ""
	I0917 08:41:25.938859  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:25.938905  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.942182  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:25.942243  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:25.975310  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:25.975334  397419 cri.go:89] found id: ""
	I0917 08:41:25.975345  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:25.975405  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.978637  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:25.978703  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:26.012169  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:26.012190  397419 cri.go:89] found id: ""
	I0917 08:41:26.012200  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:26.012256  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:26.015540  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:26.015562  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:26.093016  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:26.093054  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:26.136808  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:26.136847  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:26.188782  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:26.188814  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:26.226705  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:26.226736  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:26.259580  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:26.259609  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:26.335847  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:26.335885  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:26.378206  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:26.378237  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:26.404518  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:26.404550  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:26.508227  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:26.508263  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:26.543742  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:26.543777  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:26.600899  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:26.600938  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.138040  397419 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 08:41:29.142631  397419 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 08:41:29.143571  397419 api_server.go:141] control plane version: v1.31.1
	I0917 08:41:29.143606  397419 api_server.go:131] duration metric: took 3.385163598s to wait for apiserver health ...
	I0917 08:41:29.143621  397419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 08:41:29.143650  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:29.143699  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:29.178086  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:29.178111  397419 cri.go:89] found id: ""
	I0917 08:41:29.178121  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:29.178180  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.181712  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:29.181779  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:29.215733  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:29.215755  397419 cri.go:89] found id: ""
	I0917 08:41:29.215763  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:29.215809  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.219058  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:29.219111  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:29.252251  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:29.252272  397419 cri.go:89] found id: ""
	I0917 08:41:29.252279  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:29.252321  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.255633  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:29.255688  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:29.289333  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:29.289359  397419 cri.go:89] found id: ""
	I0917 08:41:29.289369  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:29.289423  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.292943  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:29.292996  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:29.326709  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:29.326731  397419 cri.go:89] found id: ""
	I0917 08:41:29.326739  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:29.326799  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.330170  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:29.330226  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:29.363477  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:29.363501  397419 cri.go:89] found id: ""
	I0917 08:41:29.363511  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:29.363567  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.366804  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:29.366860  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:29.399852  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.399872  397419 cri.go:89] found id: ""
	I0917 08:41:29.399881  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:29.399934  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.403233  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:29.403253  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:29.451453  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:29.451484  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:29.488951  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:29.488979  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:29.523572  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:29.523603  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:29.579709  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:29.579750  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:29.658415  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:29.658455  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:29.735441  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:29.735481  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:29.762124  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:29.762159  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:29.856247  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:29.856278  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:29.902365  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:29.902398  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:29.938050  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:29.938081  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.973223  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:29.973251  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:32.526366  397419 system_pods.go:59] 19 kube-system pods found
	I0917 08:41:32.526399  397419 system_pods.go:61] "coredns-7c65d6cfc9-7lhft" [d955ab8f-33f3-4177-a7cf-29b7b9cc1102] Running
	I0917 08:41:32.526405  397419 system_pods.go:61] "csi-hostpath-attacher-0" [74cbb098-f189-44df-a4b9-3d4644fad690] Running
	I0917 08:41:32.526409  397419 system_pods.go:61] "csi-hostpath-resizer-0" [2d53c081-d93a-46a4-8b7b-29e15b9b485e] Running
	I0917 08:41:32.526413  397419 system_pods.go:61] "csi-hostpathplugin-lknd7" [3267ecfa-6ae5-4291-9944-574c0476e9ec] Running
	I0917 08:41:32.526416  397419 system_pods.go:61] "etcd-addons-093168" [a017480c-3ca0-477f-801b-630887a3efdd] Running
	I0917 08:41:32.526420  397419 system_pods.go:61] "kindnet-nvhtv" [2a27ef1d-01b4-4db6-9b83-51a2b2889bc2] Running
	I0917 08:41:32.526422  397419 system_pods.go:61] "kube-apiserver-addons-093168" [1b03826d-3f50-4a0c-a2ad-f8d354f0935a] Running
	I0917 08:41:32.526425  397419 system_pods.go:61] "kube-controller-manager-addons-093168" [2da0a6e2-49be-44c3-a463-463a9865310f] Running
	I0917 08:41:32.526428  397419 system_pods.go:61] "kube-ingress-dns-minikube" [236b5470-912c-4665-ae2a-0aeda61e0892] Running
	I0917 08:41:32.526432  397419 system_pods.go:61] "kube-proxy-t77c5" [76518769-e724-461e-8134-d120144d60a8] Running
	I0917 08:41:32.526436  397419 system_pods.go:61] "kube-scheduler-addons-093168" [8dbe178e-95a4-491e-a059-423f6b78f417] Running
	I0917 08:41:32.526441  397419 system_pods.go:61] "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
	I0917 08:41:32.526445  397419 system_pods.go:61] "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
	I0917 08:41:32.526450  397419 system_pods.go:61] "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
	I0917 08:41:32.526455  397419 system_pods.go:61] "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
	I0917 08:41:32.526461  397419 system_pods.go:61] "snapshot-controller-56fcc65765-md5h6" [ff141ee6-2569-49b0-8b1a-83d9a1a05178] Running
	I0917 08:41:32.526470  397419 system_pods.go:61] "snapshot-controller-56fcc65765-xdr22" [69737144-ad79-4db9-ae9c-e5575f580f48] Running
	I0917 08:41:32.526475  397419 system_pods.go:61] "storage-provisioner" [e20caa93-3db5-4d96-b8a8-7665d4f5437d] Running
	I0917 08:41:32.526483  397419 system_pods.go:61] "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
	I0917 08:41:32.526493  397419 system_pods.go:74] duration metric: took 3.382863956s to wait for pod list to return data ...
	I0917 08:41:32.526503  397419 default_sa.go:34] waiting for default service account to be created ...
	I0917 08:41:32.529073  397419 default_sa.go:45] found service account: "default"
	I0917 08:41:32.529100  397419 default_sa.go:55] duration metric: took 2.584342ms for default service account to be created ...
	I0917 08:41:32.529110  397419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 08:41:32.539148  397419 system_pods.go:86] 19 kube-system pods found
	I0917 08:41:32.539179  397419 system_pods.go:89] "coredns-7c65d6cfc9-7lhft" [d955ab8f-33f3-4177-a7cf-29b7b9cc1102] Running
	I0917 08:41:32.539185  397419 system_pods.go:89] "csi-hostpath-attacher-0" [74cbb098-f189-44df-a4b9-3d4644fad690] Running
	I0917 08:41:32.539189  397419 system_pods.go:89] "csi-hostpath-resizer-0" [2d53c081-d93a-46a4-8b7b-29e15b9b485e] Running
	I0917 08:41:32.539193  397419 system_pods.go:89] "csi-hostpathplugin-lknd7" [3267ecfa-6ae5-4291-9944-574c0476e9ec] Running
	I0917 08:41:32.539196  397419 system_pods.go:89] "etcd-addons-093168" [a017480c-3ca0-477f-801b-630887a3efdd] Running
	I0917 08:41:32.539200  397419 system_pods.go:89] "kindnet-nvhtv" [2a27ef1d-01b4-4db6-9b83-51a2b2889bc2] Running
	I0917 08:41:32.539203  397419 system_pods.go:89] "kube-apiserver-addons-093168" [1b03826d-3f50-4a0c-a2ad-f8d354f0935a] Running
	I0917 08:41:32.539207  397419 system_pods.go:89] "kube-controller-manager-addons-093168" [2da0a6e2-49be-44c3-a463-463a9865310f] Running
	I0917 08:41:32.539210  397419 system_pods.go:89] "kube-ingress-dns-minikube" [236b5470-912c-4665-ae2a-0aeda61e0892] Running
	I0917 08:41:32.539213  397419 system_pods.go:89] "kube-proxy-t77c5" [76518769-e724-461e-8134-d120144d60a8] Running
	I0917 08:41:32.539216  397419 system_pods.go:89] "kube-scheduler-addons-093168" [8dbe178e-95a4-491e-a059-423f6b78f417] Running
	I0917 08:41:32.539220  397419 system_pods.go:89] "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
	I0917 08:41:32.539223  397419 system_pods.go:89] "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
	I0917 08:41:32.539227  397419 system_pods.go:89] "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
	I0917 08:41:32.539230  397419 system_pods.go:89] "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
	I0917 08:41:32.539235  397419 system_pods.go:89] "snapshot-controller-56fcc65765-md5h6" [ff141ee6-2569-49b0-8b1a-83d9a1a05178] Running
	I0917 08:41:32.539242  397419 system_pods.go:89] "snapshot-controller-56fcc65765-xdr22" [69737144-ad79-4db9-ae9c-e5575f580f48] Running
	I0917 08:41:32.539245  397419 system_pods.go:89] "storage-provisioner" [e20caa93-3db5-4d96-b8a8-7665d4f5437d] Running
	I0917 08:41:32.539248  397419 system_pods.go:89] "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
	I0917 08:41:32.539255  397419 system_pods.go:126] duration metric: took 10.139894ms to wait for k8s-apps to be running ...
	I0917 08:41:32.539265  397419 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 08:41:32.539310  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 08:41:32.550663  397419 system_svc.go:56] duration metric: took 11.387952ms WaitForService to wait for kubelet
	I0917 08:41:32.550703  397419 kubeadm.go:582] duration metric: took 2m35.445654974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:41:32.550732  397419 node_conditions.go:102] verifying NodePressure condition ...
	I0917 08:41:32.553809  397419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 08:41:32.553834  397419 node_conditions.go:123] node cpu capacity is 8
	I0917 08:41:32.553851  397419 node_conditions.go:105] duration metric: took 3.112867ms to run NodePressure ...
	I0917 08:41:32.553869  397419 start.go:241] waiting for startup goroutines ...
	I0917 08:41:32.553875  397419 start.go:246] waiting for cluster config update ...
	I0917 08:41:32.553893  397419 start.go:255] writing updated cluster config ...
	I0917 08:41:32.554149  397419 ssh_runner.go:195] Run: rm -f paused
	I0917 08:41:32.604339  397419 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 08:41:32.606540  397419 out.go:177] * Done! kubectl is now configured to use "addons-093168" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 08:54:33 addons-093168 crio[1031]: time="2024-09-17 08:54:33.935751122Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1c94b259-4511-4012-b9c5-0eece6850aec name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:54:33 addons-093168 crio[1031]: time="2024-09-17 08:54:33.936057451Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1c94b259-4511-4012-b9c5-0eece6850aec name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:54:36 addons-093168 crio[1031]: time="2024-09-17 08:54:36.861977589Z" level=info msg="Pulling image: busybox:stable" id=7e3b6c8e-e91c-4704-9e60-a50cec63d88b name=/runtime.v1.ImageService/PullImage
	Sep 17 08:54:36 addons-093168 crio[1031]: time="2024-09-17 08:54:36.862176613Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Sep 17 08:54:36 addons-093168 crio[1031]: time="2024-09-17 08:54:36.879873452Z" level=info msg="Trying to access \"docker.io/library/busybox:stable\""
	Sep 17 08:54:46 addons-093168 crio[1031]: time="2024-09-17 08:54:46.935556510Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b40687c1-7688-428f-ab40-4d1a603a5704 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:54:46 addons-093168 crio[1031]: time="2024-09-17 08:54:46.935789351Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b40687c1-7688-428f-ab40-4d1a603a5704 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:54:51 addons-093168 crio[1031]: time="2024-09-17 08:54:51.936978403Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=defaedc2-8ab7-43b8-a99b-8f3059a5930d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:54:51 addons-093168 crio[1031]: time="2024-09-17 08:54:51.937182695Z" level=info msg="Image docker.io/nginx:alpine not found" id=defaedc2-8ab7-43b8-a99b-8f3059a5930d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:54:58 addons-093168 crio[1031]: time="2024-09-17 08:54:58.936237700Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=31c0f2f3-5b81-4400-8afb-b37932f75bae name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:54:58 addons-093168 crio[1031]: time="2024-09-17 08:54:58.936491383Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=31c0f2f3-5b81-4400-8afb-b37932f75bae name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:05 addons-093168 crio[1031]: time="2024-09-17 08:55:05.936390637Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8c84ee82-1f6f-4cea-89e3-2684351890cb name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:05 addons-093168 crio[1031]: time="2024-09-17 08:55:05.936689520Z" level=info msg="Image docker.io/nginx:alpine not found" id=8c84ee82-1f6f-4cea-89e3-2684351890cb name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:12 addons-093168 crio[1031]: time="2024-09-17 08:55:12.936218237Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bfd5da43-d9db-4c34-8def-17d7d266a9ca name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:12 addons-093168 crio[1031]: time="2024-09-17 08:55:12.936516283Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bfd5da43-d9db-4c34-8def-17d7d266a9ca name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:13 addons-093168 crio[1031]: time="2024-09-17 08:55:13.991142327Z" level=info msg="Pulling image: docker.io/nginx:latest" id=6f2f0d5a-416e-4bd0-a3ce-26e96196cd43 name=/runtime.v1.ImageService/PullImage
	Sep 17 08:55:13 addons-093168 crio[1031]: time="2024-09-17 08:55:13.992601285Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 17 08:55:20 addons-093168 crio[1031]: time="2024-09-17 08:55:20.935783109Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=dcd014bb-59c5-4461-b513-2ea5fcf678f0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:20 addons-093168 crio[1031]: time="2024-09-17 08:55:20.936054916Z" level=info msg="Image docker.io/nginx:alpine not found" id=dcd014bb-59c5-4461-b513-2ea5fcf678f0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:25 addons-093168 crio[1031]: time="2024-09-17 08:55:25.935824256Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bef2d257-adc1-44b9-a91f-f2d976b09cf7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:25 addons-093168 crio[1031]: time="2024-09-17 08:55:25.936047073Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bef2d257-adc1-44b9-a91f-f2d976b09cf7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:26 addons-093168 crio[1031]: time="2024-09-17 08:55:26.936276907Z" level=info msg="Checking image status: busybox:stable" id=e0255049-8332-435f-9033-5c363073b7f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:26 addons-093168 crio[1031]: time="2024-09-17 08:55:26.936490604Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Sep 17 08:55:26 addons-093168 crio[1031]: time="2024-09-17 08:55:26.936676121Z" level=info msg="Image busybox:stable not found" id=e0255049-8332-435f-9033-5c363073b7f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:35 addons-093168 crio[1031]: time="2024-09-17 08:55:35.171561670Z" level=info msg="Stopping container: 2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00 (timeout: 30s)" id=0da62d4e-50bc-45b8-b123-ca8f2f6c0299 name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0906bd347c6d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          15 minutes ago      Running             csi-snapshotter                          0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	f64b5aebbe7dd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 minutes ago      Running             csi-provisioner                          0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	eba5434cab6ab       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 minutes ago      Running             liveness-probe                           0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	057ac2c02266d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           15 minutes ago      Running             hostpath                                 0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	a0cca87be1a6f       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             15 minutes ago      Running             controller                               0                   2ba51e0898663       ingress-nginx-controller-bc57996ff-vgw4z
	db9ecacd5aed6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                15 minutes ago      Running             node-driver-registrar                    0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	843e30f0a0cf8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 15 minutes ago      Running             gcp-auth                                 0                   2e75c3dc5c24b       gcp-auth-89d5ffd79-xhlm6
	2999275b8545c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        15 minutes ago      Exited              metrics-server                           0                   7951cf53f3ce5       metrics-server-84c5f94fbc-bmr95
	a53dfdb3b91a2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              15 minutes ago      Running             csi-resizer                              0                   e4b2df5e4c60c       csi-hostpath-resizer-0
	a31591d3a75de       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             15 minutes ago      Running             local-path-provisioner                   0                   655c3c112fdda       local-path-provisioner-86d989889c-qkqjp
	221d8f80ce839       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             15 minutes ago      Running             csi-attacher                             0                   47552b94b1444       csi-hostpath-attacher-0
	12e5d8714fa59       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   15 minutes ago      Exited              patch                                    0                   4d5a9d109a211       ingress-nginx-admission-patch-pzmkp
	f921ee5175ec0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   15 minutes ago      Running             csi-external-health-monitor-controller   0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	a54dcb4e0840a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   15 minutes ago      Exited              create                                   0                   fc238c2462bf5       ingress-nginx-admission-create-4qdns
	b1aa0b4e6a00c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   47f5d8b226a2a       snapshot-controller-56fcc65765-xdr22
	85332a0e5866e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   f61171da5bfb1       snapshot-controller-56fcc65765-md5h6
	3300f395d8567       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             15 minutes ago      Running             minikube-ingress-dns                     0                   f7a1428432f34       kube-ingress-dns-minikube
	5eddba40afd11       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             15 minutes ago      Running             coredns                                  0                   ebe1938207849       coredns-7c65d6cfc9-7lhft
	6d7dbaef7a5cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             15 minutes ago      Running             storage-provisioner                      0                   c9466fe8d518b       storage-provisioner
	3a8b894037793       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             16 minutes ago      Running             kube-proxy                               0                   eb334b9a5799a       kube-proxy-t77c5
	c9fa6b2ef5f0b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             16 minutes ago      Running             kindnet-cni                              0                   2e76c07fa96a5       kindnet-nvhtv
	e817293c644c7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             16 minutes ago      Running             kube-scheduler                           0                   a4765fe76b73a       kube-scheduler-addons-093168
	3521aa957963e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             16 minutes ago      Running             kube-controller-manager                  0                   2608552715e00       kube-controller-manager-addons-093168
	498509ee96967       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             16 minutes ago      Running             etcd                                     0                   62ce9ab109c53       etcd-addons-093168
	a2e61e738c0da       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             16 minutes ago      Running             kube-apiserver                           0                   bceb5d8367d07       kube-apiserver-addons-093168
	
	
	==> coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] <==
	[INFO] 10.244.0.11:33082 - 25853 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001192s
	[INFO] 10.244.0.11:37329 - 15527 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075609s
	[INFO] 10.244.0.11:37329 - 17316 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121561s
	[INFO] 10.244.0.11:60250 - 35649 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005099659s
	[INFO] 10.244.0.11:60250 - 60739 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006207516s
	[INFO] 10.244.0.11:37419 - 41998 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006428119s
	[INFO] 10.244.0.11:37419 - 39435 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006489964s
	[INFO] 10.244.0.11:56965 - 22146 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005110836s
	[INFO] 10.244.0.11:56965 - 41870 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005774722s
	[INFO] 10.244.0.11:40932 - 6018 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055144s
	[INFO] 10.244.0.11:40932 - 2693 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093554s
	[INFO] 10.244.0.20:60603 - 21372 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000239521s
	[INFO] 10.244.0.20:56296 - 33744 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000369472s
	[INFO] 10.244.0.20:40076 - 30284 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123756s
	[INFO] 10.244.0.20:49639 - 52270 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158323s
	[INFO] 10.244.0.20:40994 - 1923 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099192s
	[INFO] 10.244.0.20:37435 - 32231 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168193s
	[INFO] 10.244.0.20:36201 - 45290 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008885924s
	[INFO] 10.244.0.20:59898 - 55008 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008870022s
	[INFO] 10.244.0.20:43991 - 39302 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007846244s
	[INFO] 10.244.0.20:58304 - 34077 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008338334s
	[INFO] 10.244.0.20:34428 - 29339 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006763856s
	[INFO] 10.244.0.20:47732 - 9825 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007153268s
	[INFO] 10.244.0.20:52184 - 47443 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000802704s
	[INFO] 10.244.0.20:41521 - 18294 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000879797s
	
	
	==> describe nodes <==
	Name:               addons-093168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-093168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=addons-093168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T08_38_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-093168
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-093168"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:38:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-093168
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 08:55:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-093168
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fdb73868874fa2aa4322a27fc496be
	  System UUID:                7036efa9-bcf4-469e-8312-994f69eacc62
	  Boot ID:                    8c59a26b-5d0c-4753-9e88-ef03399e569b
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     test-local-path                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  gcp-auth                    gcp-auth-89d5ffd79-xhlm6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vgw4z    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-7lhft                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpathplugin-lknd7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-addons-093168                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-nvhtv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-093168                250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-093168       200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-t77c5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-093168                100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-md5h6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-xdr22        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-qkqjp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 16m   kube-proxy       
	  Normal   Starting                 16m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m   kubelet          Node addons-093168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m   kubelet          Node addons-093168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m   kubelet          Node addons-093168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m   node-controller  Node addons-093168 event: Registered Node addons-093168 in Controller
	  Normal   NodeReady                15m   kubelet          Node addons-093168 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba ff 74 a1 5e 3b 08 06
	[ +13.302976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 08 54 46 b8 ba 08 06
	[  +0.000352] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba ff 74 a1 5e 3b 08 06
	[Sep17 08:24] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 24 b9 ac 9a ab 08 06
	[  +0.000405] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a b6 29 69 41 ca 08 06
	[ +18.455196] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 92 00 b0 ac cb 10 08 06
	[  +0.102770] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	[ +10.887970] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cni0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[  +0.094820] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[Sep17 08:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 14 a2 f8 f7 06 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[ +21.407596] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 7a 9f 11 c8 01 08 06
	[  +0.000366] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	
	
	==> etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] <==
	{"level":"info","ts":"2024-09-17T08:39:00.944484Z","caller":"traceutil/trace.go:171","msg":"trace[1814892135] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:389; }","duration":"195.704417ms","start":"2024-09-17T08:39:00.748773Z","end":"2024-09-17T08:39:00.944477Z","steps":["trace[1814892135] 'agreement among raft nodes before linearized reading'  (duration: 193.700916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:00.942519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.799596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-17T08:39:00.944656Z","caller":"traceutil/trace.go:171","msg":"trace[1494037761] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:389; }","duration":"195.932813ms","start":"2024-09-17T08:39:00.748716Z","end":"2024-09-17T08:39:00.944649Z","steps":["trace[1494037761] 'agreement among raft nodes before linearized reading'  (duration: 193.78917ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.236883Z","caller":"traceutil/trace.go:171","msg":"trace[1393862041] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"189.03103ms","start":"2024-09-17T08:39:01.047836Z","end":"2024-09-17T08:39:01.236868Z","steps":["trace[1393862041] 'process raft request'  (duration: 84.371141ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246393Z","caller":"traceutil/trace.go:171","msg":"trace[350871136] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"192.090658ms","start":"2024-09-17T08:39:01.054286Z","end":"2024-09-17T08:39:01.246377Z","steps":["trace[350871136] 'process raft request'  (duration: 192.056665ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246556Z","caller":"traceutil/trace.go:171","msg":"trace[288716589] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"192.561769ms","start":"2024-09-17T08:39:01.053978Z","end":"2024-09-17T08:39:01.246540Z","steps":["trace[288716589] 'process raft request'  (duration: 192.289701ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246589Z","caller":"traceutil/trace.go:171","msg":"trace[842047613] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"194.309372ms","start":"2024-09-17T08:39:01.052273Z","end":"2024-09-17T08:39:01.246583Z","steps":["trace[842047613] 'process raft request'  (duration: 193.860025ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246756Z","caller":"traceutil/trace.go:171","msg":"trace[874038599] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"192.611349ms","start":"2024-09-17T08:39:01.054136Z","end":"2024-09-17T08:39:01.246747Z","steps":["trace[874038599] 'process raft request'  (duration: 192.166716ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246789Z","caller":"traceutil/trace.go:171","msg":"trace[832402900] linearizableReadLoop","detail":"{readStateIndex:412; appliedIndex:412; }","duration":"107.196849ms","start":"2024-09-17T08:39:01.139584Z","end":"2024-09-17T08:39:01.246781Z","steps":["trace[832402900] 'read index received'  (duration: 107.193495ms)","trace[832402900] 'applied index is now lower than readState.Index'  (duration: 2.936µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T08:39:01.246842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.242882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.247903Z","caller":"traceutil/trace.go:171","msg":"trace[1595279853] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"108.044342ms","start":"2024-09-17T08:39:01.139580Z","end":"2024-09-17T08:39:01.247624Z","steps":["trace[1595279853] 'agreement among raft nodes before linearized reading'  (duration: 107.221566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.249317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.530022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.250846Z","caller":"traceutil/trace.go:171","msg":"trace[1335273238] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:407; }","duration":"111.069492ms","start":"2024-09-17T08:39:01.139765Z","end":"2024-09-17T08:39:01.250834Z","steps":["trace[1335273238] 'agreement among raft nodes before linearized reading'  (duration: 109.456626ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.249635Z","caller":"traceutil/trace.go:171","msg":"trace[134367931] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"109.798885ms","start":"2024-09-17T08:39:01.139825Z","end":"2024-09-17T08:39:01.249624Z","steps":["trace[134367931] 'process raft request'  (duration: 109.176303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.250797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.892038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.251932Z","caller":"traceutil/trace.go:171","msg":"trace[1048075780] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:407; }","duration":"112.027319ms","start":"2024-09-17T08:39:01.139891Z","end":"2024-09-17T08:39:01.251919Z","steps":["trace[1048075780] 'agreement among raft nodes before linearized reading'  (duration: 110.877975ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:40:35.768719Z","caller":"traceutil/trace.go:171","msg":"trace[33144781] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"100.543757ms","start":"2024-09-17T08:40:35.668147Z","end":"2024-09-17T08:40:35.768691Z","steps":["trace[33144781] 'process raft request'  (duration: 100.303667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:40:35.958931Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.840736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-bmr95\" ","response":"range_response_count:1 size:4865"}
	{"level":"info","ts":"2024-09-17T08:40:35.958981Z","caller":"traceutil/trace.go:171","msg":"trace[13582332] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-bmr95; range_end:; response_count:1; response_revision:1201; }","duration":"105.907905ms","start":"2024-09-17T08:40:35.853062Z","end":"2024-09-17T08:40:35.958970Z","steps":["trace[13582332] 'range keys from in-memory index tree'  (duration: 105.71294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:48:48.277449Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1537}
	{"level":"info","ts":"2024-09-17T08:48:48.301907Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1537,"took":"23.976999ms","hash":2118524458,"current-db-size-bytes":6434816,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-17T08:48:48.301956Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2118524458,"revision":1537,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T08:53:48.282008Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1957}
	{"level":"info","ts":"2024-09-17T08:53:48.297895Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1957,"took":"15.390014ms","hash":1935728090,"current-db-size-bytes":6434816,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3989504,"current-db-size-in-use":"4.0 MB"}
	{"level":"info","ts":"2024-09-17T08:53:48.297952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1935728090,"revision":1957,"compact-revision":1537}
	
	
	==> gcp-auth [843e30f0a0cf860efc230a2a87deca3cc75d4f6408e31a84a0dd5b01df4dc08d] <==
	2024/09/17 08:41:32 Ready to write response ...
	2024/09/17 08:41:32 Ready to marshal response ...
	2024/09/17 08:41:32 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:45 Ready to marshal response ...
	2024/09/17 08:49:45 Ready to write response ...
	2024/09/17 08:49:46 Ready to marshal response ...
	2024/09/17 08:49:46 Ready to write response ...
	2024/09/17 08:49:51 Ready to marshal response ...
	2024/09/17 08:49:51 Ready to write response ...
	2024/09/17 08:49:52 Ready to marshal response ...
	2024/09/17 08:49:52 Ready to write response ...
	2024/09/17 08:49:52 Ready to marshal response ...
	2024/09/17 08:49:52 Ready to write response ...
	2024/09/17 08:49:53 Ready to marshal response ...
	2024/09/17 08:49:53 Ready to write response ...
	2024/09/17 08:49:54 Ready to marshal response ...
	2024/09/17 08:49:54 Ready to write response ...
	2024/09/17 08:50:25 Ready to marshal response ...
	2024/09/17 08:50:25 Ready to write response ...
	
	
	==> kernel <==
	 08:55:36 up  2:38,  0 users,  load average: 0.02, 0.16, 0.53
	Linux addons-093168 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] <==
	I0917 08:53:31.149290       1 main.go:299] handling current node
	I0917 08:53:41.156014       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:53:41.156047       1 main.go:299] handling current node
	I0917 08:53:51.152046       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:53:51.152082       1 main.go:299] handling current node
	I0917 08:54:01.149091       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:01.149131       1 main.go:299] handling current node
	I0917 08:54:11.149731       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:11.149776       1 main.go:299] handling current node
	I0917 08:54:21.148912       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:21.148963       1 main.go:299] handling current node
	I0917 08:54:31.149251       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:31.149283       1 main.go:299] handling current node
	I0917 08:54:41.155277       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:41.155313       1 main.go:299] handling current node
	I0917 08:54:51.148947       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:51.148992       1 main.go:299] handling current node
	I0917 08:55:01.149259       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:01.149298       1 main.go:299] handling current node
	I0917 08:55:11.149704       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:11.149737       1 main.go:299] handling current node
	I0917 08:55:21.152017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:21.152059       1 main.go:299] handling current node
	I0917 08:55:31.150239       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:31.150273       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] <==
	W0917 08:41:23.031581       1 handler_proxy.go:99] no RequestInfo found in the context
	W0917 08:41:23.031606       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 08:41:23.031645       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0917 08:41:23.031691       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 08:41:23.032764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 08:41:23.032787       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0917 08:41:27.038506       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W0917 08:41:27.038723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 08:41:27.039088       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 08:41:27.049456       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0917 08:49:36.125694       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.199.141"}
	E0917 08:49:48.897202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40012: use of closed network connection
	E0917 08:49:48.922992       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:41648: read: connection reset by peer
	E0917 08:49:53.964352       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0917 08:49:54.758375       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 08:49:54.934461       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.248.144"}
	I0917 08:50:05.538716       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 08:53:08.601116       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 08:53:09.617791       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] <==
	I0917 08:50:25.617335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093168"
	I0917 08:50:46.196796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.288µs"
	I0917 08:53:03.110800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="8.873µs"
	E0917 08:53:09.619260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:53:11.124524       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:11.124582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:53:13.274186       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:13.274230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:53:18.716391       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0917 08:53:19.260347       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:19.260394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:53:26.564141       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 08:53:26.564179       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 08:53:26.970602       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 08:53:26.970647       1 shared_informer.go:320] Caches are synced for garbage collector
	W0917 08:53:31.159305       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:31.159360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:53:52.995298       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:52.995348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:54:39.470027       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:54:39.470076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:55:18.328582       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:55:18.328633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:55:32.767462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093168"
	I0917 08:55:35.162554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="7.818µs"
	
	
	==> kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] <==
	I0917 08:39:00.642627       1 server_linux.go:66] "Using iptables proxy"
	I0917 08:39:01.648049       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 08:39:01.648220       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:39:02.034353       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 08:39:02.034507       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:39:02.043649       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:39:02.044366       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:39:02.044467       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:39:02.047306       1 config.go:199] "Starting service config controller"
	I0917 08:39:02.047353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:39:02.047414       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:39:02.047425       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:39:02.048125       1 config.go:328] "Starting node config controller"
	I0917 08:39:02.048199       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:39:02.148044       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 08:39:02.148173       1 shared_informer.go:320] Caches are synced for service config
	I0917 08:39:02.150486       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] <==
	W0917 08:38:49.536513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0917 08:38:49.536752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0917 08:38:49.536844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 08:38:49.536913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 08:38:49.537008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:49.536852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.537056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.536771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:49.537088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 08:38:49.537126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 08:38:49.537153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.537194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0917 08:38:49.537194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 08:38:49.537213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.537222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:50.443859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 08:38:50.443910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:50.468561       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:50.468614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 08:38:50.759161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 08:55:02 addons-093168 kubelet[1648]: E0917 08:55:02.257302    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563302257018058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:05 addons-093168 kubelet[1648]: E0917 08:55:05.936921    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="310f797d-f8e1-4d73-abe1-05f4dc832ecc"
	Sep 17 08:55:12 addons-093168 kubelet[1648]: E0917 08:55:12.259216    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563312258970805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:12 addons-093168 kubelet[1648]: E0917 08:55:12.259250    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563312258970805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:12 addons-093168 kubelet[1648]: E0917 08:55:12.936806    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:55:13 addons-093168 kubelet[1648]: E0917 08:55:13.990262    1648 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Sep 17 08:55:13 addons-093168 kubelet[1648]: E0917 08:55:13.990330    1648 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Sep 17 08:55:13 addons-093168 kubelet[1648]: E0917 08:55:13.990571    1648 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9njfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(e7497496-c2fe-46d3-98d2-378a076580ac): ErrImagePull: loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 08:55:13 addons-093168 kubelet[1648]: E0917 08:55:13.992269    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="e7497496-c2fe-46d3-98d2-378a076580ac"
	Sep 17 08:55:22 addons-093168 kubelet[1648]: E0917 08:55:22.261795    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563322261536631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:22 addons-093168 kubelet[1648]: E0917 08:55:22.261830    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563322261536631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:25 addons-093168 kubelet[1648]: E0917 08:55:25.936253    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:55:26 addons-093168 kubelet[1648]: E0917 08:55:26.938041    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="e7497496-c2fe-46d3-98d2-378a076580ac"
	Sep 17 08:55:32 addons-093168 kubelet[1648]: E0917 08:55:32.264966    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563332264699815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:32 addons-093168 kubelet[1648]: E0917 08:55:32.265005    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563332264699815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.501522    1648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdzfh\" (UniqueName: \"kubernetes.io/projected/48e9bb6a-e161-4bfe-a8e4-14f5b970e50c-kube-api-access-kdzfh\") pod \"48e9bb6a-e161-4bfe-a8e4-14f5b970e50c\" (UID: \"48e9bb6a-e161-4bfe-a8e4-14f5b970e50c\") "
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.501598    1648 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/48e9bb6a-e161-4bfe-a8e4-14f5b970e50c-tmp-dir\") pod \"48e9bb6a-e161-4bfe-a8e4-14f5b970e50c\" (UID: \"48e9bb6a-e161-4bfe-a8e4-14f5b970e50c\") "
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.501964    1648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48e9bb6a-e161-4bfe-a8e4-14f5b970e50c-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "48e9bb6a-e161-4bfe-a8e4-14f5b970e50c" (UID: "48e9bb6a-e161-4bfe-a8e4-14f5b970e50c"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.503345    1648 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e9bb6a-e161-4bfe-a8e4-14f5b970e50c-kube-api-access-kdzfh" (OuterVolumeSpecName: "kube-api-access-kdzfh") pod "48e9bb6a-e161-4bfe-a8e4-14f5b970e50c" (UID: "48e9bb6a-e161-4bfe-a8e4-14f5b970e50c"). InnerVolumeSpecName "kube-api-access-kdzfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.602827    1648 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/48e9bb6a-e161-4bfe-a8e4-14f5b970e50c-tmp-dir\") on node \"addons-093168\" DevicePath \"\""
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.602869    1648 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kdzfh\" (UniqueName: \"kubernetes.io/projected/48e9bb6a-e161-4bfe-a8e4-14f5b970e50c-kube-api-access-kdzfh\") on node \"addons-093168\" DevicePath \"\""
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.732447    1648 scope.go:117] "RemoveContainer" containerID="2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00"
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.749496    1648 scope.go:117] "RemoveContainer" containerID="2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00"
	Sep 17 08:55:36 addons-093168 kubelet[1648]: E0917 08:55:36.749886    1648 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00\": container with ID starting with 2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00 not found: ID does not exist" containerID="2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00"
	Sep 17 08:55:36 addons-093168 kubelet[1648]: I0917 08:55:36.749925    1648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00"} err="failed to get container status \"2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00\": rpc error: code = NotFound desc = could not find container \"2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00\": container with ID starting with 2999275b8545cb45b2cd31bad3055899097f827e984f8043dd0ee27009fbce00 not found: ID does not exist"
	
	
	==> storage-provisioner [6d7dbaef7a5cdfbfc36d8383927eea1f42c07e4bc01e6aa61dd711665433a6d2] <==
	I0917 08:39:42.145412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:39:42.155383       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:39:42.155443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 08:39:42.163576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 08:39:42.163731       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e63dab40-9e98-4f4f-adef-1b218f507e90", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b became leader
	I0917 08:39:42.163849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b!
	I0917 08:39:42.264554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b!
	

                                                
                                                
-- /stdout --
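
The kubelet tail above shows the dominant failure mode of this run: pulls from docker.io are rejected with toomanyrequests (Docker Hub's anonymous rate limit), not a fault in the addon under test. A minimal follow-up sketch, assuming the same kubectl context, to surface all failed-pull events from the cluster (this command was not part of the recorded run):

	# list Failed events cluster-wide, newest last, keeping only rate-limit hits
	kubectl --context addons-093168 get events -A --field-selector reason=Failed \
	  --sort-by=.lastTimestamp | grep -i toomanyrequests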
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093168 -n addons-093168
helpers_test.go:261: (dbg) Run:  kubectl --context addons-093168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp: exit status 1 (93.31771ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:41:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gdp6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gdp6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  14m                default-scheduler  Successfully assigned default/busybox to addons-093168
	  Normal   Pulling    12m (x4 over 14m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)  kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m (x44 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:49:54 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dd297 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dd297:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m43s                default-scheduler  Successfully assigned default/nginx to addons-093168
	  Warning  Failed     5m7s                 kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     61s (x3 over 5m7s)   kubelet            Error: ErrImagePull
	  Warning  Failed     61s (x2 over 3m4s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    32s (x4 over 5m6s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     32s (x4 over 5m6s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    17s (x4 over 5m42s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:50:25 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gzwmm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-gzwmm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m12s                default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-093168
	  Warning  Failed     91s (x2 over 3m34s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     91s (x2 over 3m34s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    77s (x2 over 3m34s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     77s (x2 over 3m34s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    66s (x3 over 5m11s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:49:57 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9njfw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-9njfw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m40s                default-scheduler  Successfully assigned default/test-local-path to addons-093168
	  Normal   Pulling    98s (x3 over 5m38s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     24s (x3 over 4m6s)   kubelet            Failed to pull image "busybox:stable": loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     24s (x3 over 4m6s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x3 over 4m5s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     11s (x3 over 4m5s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4qdns" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pzmkp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (288.95s)
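Note on the failure mode: every ErrImagePull in this report resolves to Docker Hub's anonymous pull rate limit ("toomanyrequests"), not to a defect in the addons under test. A minimal mitigation sketch, assuming the CI host has Docker available and reusing the profile name from this run (exactly which images are worth pre-loading is an assumption):

  # Pull once on the host, then side-load the image into the minikube node
  # so the kubelet never has to contact Docker Hub during the test.
  docker pull busybox:stable
  out/minikube-linux-amd64 -p addons-093168 image load busybox:stable

Authenticating the runner against Docker Hub (docker login) would raise the anonymous limit and address all of the pull failures in this report at once.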

                                                
                                    
TestAddons/parallel/CSI (402.1s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.230681ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-093168 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-093168 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d6751fd9-abc1-467e-8c01-3ceb5ddd295b] Pending
helpers_test.go:344: "task-pv-pod" [d6751fd9-abc1-467e-8c01-3ceb5ddd295b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d6751fd9-abc1-467e-8c01-3ceb5ddd295b] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004228458s
addons_test.go:590: (dbg) Run:  kubectl --context addons-093168 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-093168 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-093168 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-093168 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-093168 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-093168 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-093168 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [973c077a-45c1-4c85-bd62-419d8901a499] Pending
helpers_test.go:344: "task-pv-pod-restore" [973c077a-45c1-4c85-bd62-419d8901a499] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod-restore" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093168 -n addons-093168
addons_test.go:627: TestAddons/parallel/CSI: showing logs for failed pods as of 2024-09-17 08:56:26.016399199 +0000 UTC m=+1103.298545079
addons_test.go:627: (dbg) Run:  kubectl --context addons-093168 describe po task-pv-pod-restore -n default
addons_test.go:627: (dbg) kubectl --context addons-093168 describe po task-pv-pod-restore -n default:
Name:             task-pv-pod-restore
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-093168/192.168.49.2
Start Time:       Tue, 17 Sep 2024 08:50:25 +0000
Labels:           app=task-pv-pod-restore
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
  IP:  10.244.0.31
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gzwmm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc-restore
    ReadOnly:   false
  kube-api-access-gzwmm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-093168
  Normal   Pulling    115s (x3 over 6m)    kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     42s (x3 over 4m23s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     42s (x3 over 4m23s)  kubelet            Error: ErrImagePull
  Normal   BackOff    5s (x5 over 4m23s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     5s (x5 over 4m23s)   kubelet            Error: ImagePullBackOff
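The event sequence above (Pulling, then Failed with "toomanyrequests", then BackOff) is the kubelet retrying the pull with exponential back-off after hitting the Docker Hub rate limit. A quick way to confirm that rate limiting, rather than anything CSI-specific, is behind the unready pods is to list the failed-pull events directly; a diagnostic sketch using only the context name from this run:

  kubectl --context addons-093168 get events -n default \
    --field-selector reason=Failed --sort-by=.lastTimestamp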
addons_test.go:627: (dbg) Run:  kubectl --context addons-093168 logs task-pv-pod-restore -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-093168 logs task-pv-pod-restore -n default: exit status 1 (66.556234ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:627: kubectl --context addons-093168 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:628: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: context deadline exceeded
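Worth noting: the snapshot/restore flow itself behaved as intended - hpvc-restore bound from the new-snapshot-demo snapshot and the restore pod was scheduled with an IP assigned - and only the nginx image pull failed. For reference, a representative manifest of the kind testdata/csi-hostpath-driver/pvc-restore.yaml creates is sketched below; the actual testdata file may differ, and the StorageClass name and requested size are assumptions:

kubectl --context addons-093168 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc   # assumption: the addon's default StorageClass
  dataSource:                         # restore source: the snapshot taken earlier
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                    # assumption: matches the original hpvc request
EOF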
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-093168
helpers_test.go:235: (dbg) docker inspect addons-093168:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926",
	        "Created": "2024-09-17T08:38:37.745470595Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 398166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T08:38:37.853843611Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/hosts",
	        "LogPath": "/var/lib/docker/containers/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926/f0cc99258b2f8ed70802ba77c0a9b220f3e493ee560fb155712909a41c373926-json.log",
	        "Name": "/addons-093168",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-093168:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-093168",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe-init/diff:/var/lib/docker/overlay2/22ea169b69b771958d5aa21d4886a5f67242c32d10a387f6aa1fe4f8feab18cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95af62a6687ad75372dfb8581b583c95f263eb51112c65d22fd385483455f4fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-093168",
	                "Source": "/var/lib/docker/volumes/addons-093168/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-093168",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-093168",
	                "name.minikube.sigs.k8s.io": "addons-093168",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a27331437cb7fe2f3918d4f21c6d0976e37e8d2fb43412d6ed2152b1f3b4fa1d",
	            "SandboxKey": "/var/run/docker/netns/a27331437cb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-093168": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b1ff23e6ca5d5222d1d8818100c713ebb16a506c62eb4243a00007b105030e92",
	                    "EndpointID": "6cf14f071fae4cd24a1dac2c9e7c6dc188dcb38a38a4daaba6556d5caaa91067",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-093168",
	                        "f0cc99258b2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-093168 -n addons-093168
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-093168 logs -n 25: (1.210600853s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-963544              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | -o=json --download-only              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-223077              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-223077              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544              | download-only-963544   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-223077              | download-only-223077   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                   | download-docker-146413 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | download-docker-146413               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-146413            | download-docker-146413 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | --download-only -p                   | binary-mirror-713061   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | binary-mirror-713061                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45413               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-713061              | binary-mirror-713061   | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| addons  | disable dashboard -p                 | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| start   | -p addons-093168 --wait=true         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | -p addons-093168                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | -p addons-093168                     |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:49 UTC | 17 Sep 24 08:49 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-093168 ip                     | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	| addons  | addons-093168 addons disable         | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:50 UTC | 17 Sep 24 08:50 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:53 UTC | 17 Sep 24 08:53 UTC |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:53 UTC | 17 Sep 24 08:53 UTC |
	|         | addons-093168                        |                        |         |         |                     |                     |
	| addons  | addons-093168 addons                 | addons-093168          | jenkins | v1.34.0 | 17 Sep 24 08:55 UTC | 17 Sep 24 08:55 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 08:38:14.268718  397419 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:14.268997  397419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:14.269006  397419 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:14.269011  397419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:14.269250  397419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 08:38:14.269979  397419 out.go:352] Setting JSON to false
	I0917 08:38:14.270971  397419 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8443,"bootTime":1726553851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:14.271094  397419 start.go:139] virtualization: kvm guest
	I0917 08:38:14.273237  397419 out.go:177] * [addons-093168] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 08:38:14.274641  397419 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 08:38:14.274672  397419 notify.go:220] Checking for updates...
	I0917 08:38:14.276997  397419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:14.277996  397419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:14.278999  397419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 08:38:14.280101  397419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 08:38:14.281266  397419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 08:38:14.282616  397419 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:14.304074  397419 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:14.304175  397419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:14.349142  397419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:14.340459492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:14.349250  397419 docker.go:318] overlay module found
	I0917 08:38:14.351082  397419 out.go:177] * Using the docker driver based on user configuration
	I0917 08:38:14.352358  397419 start.go:297] selected driver: docker
	I0917 08:38:14.352372  397419 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:14.352389  397419 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 08:38:14.353172  397419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:14.398286  397419 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-17 08:38:14.389900591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:14.398447  397419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:14.398700  397419 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:38:14.400294  397419 out.go:177] * Using Docker driver with root privileges
	I0917 08:38:14.401571  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:14.401650  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:14.401663  397419 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 08:38:14.401757  397419 start.go:340] cluster config:
	{Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:14.402986  397419 out.go:177] * Starting "addons-093168" primary control-plane node in "addons-093168" cluster
	I0917 08:38:14.404072  397419 cache.go:121] Beginning downloading kic base image for docker with crio
	I0917 08:38:14.405262  397419 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0917 08:38:14.406317  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:14.406352  397419 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0917 08:38:14.406353  397419 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0917 08:38:14.406362  397419 cache.go:56] Caching tarball of preloaded images
	I0917 08:38:14.406475  397419 preload.go:172] Found /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 08:38:14.406487  397419 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0917 08:38:14.406819  397419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json ...
	I0917 08:38:14.406838  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json: {Name:mk614388e178da61bf05196ce91ed40880ae45f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:14.422815  397419 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0917 08:38:14.422934  397419 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0917 08:38:14.422949  397419 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0917 08:38:14.422954  397419 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0917 08:38:14.422960  397419 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0917 08:38:14.422968  397419 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0917 08:38:25.896345  397419 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0917 08:38:25.896393  397419 cache.go:194] Successfully downloaded all kic artifacts
	I0917 08:38:25.896448  397419 start.go:360] acquireMachinesLock for addons-093168: {Name:mkac87ef08cf18f2f3037d42f97e6975bc93fa09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 08:38:25.896575  397419 start.go:364] duration metric: took 100.043µs to acquireMachinesLock for "addons-093168"
	I0917 08:38:25.896610  397419 start.go:93] Provisioning new machine with config: &{Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 08:38:25.896717  397419 start.go:125] createHost starting for "" (driver="docker")
	I0917 08:38:25.898703  397419 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 08:38:25.898987  397419 start.go:159] libmachine.API.Create for "addons-093168" (driver="docker")
	I0917 08:38:25.899037  397419 client.go:168] LocalClient.Create starting
	I0917 08:38:25.899156  397419 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem
	I0917 08:38:26.182492  397419 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem
	I0917 08:38:26.297180  397419 cli_runner.go:164] Run: docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 08:38:26.312692  397419 cli_runner.go:211] docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 08:38:26.312773  397419 network_create.go:284] running [docker network inspect addons-093168] to gather additional debugging logs...
	I0917 08:38:26.312794  397419 cli_runner.go:164] Run: docker network inspect addons-093168
	W0917 08:38:26.328447  397419 cli_runner.go:211] docker network inspect addons-093168 returned with exit code 1
	I0917 08:38:26.328492  397419 network_create.go:287] error running [docker network inspect addons-093168]: docker network inspect addons-093168: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-093168 not found
	I0917 08:38:26.328507  397419 network_create.go:289] output of [docker network inspect addons-093168]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-093168 not found
	
	** /stderr **
	I0917 08:38:26.328630  397419 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:26.344660  397419 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b00bc0}
	I0917 08:38:26.344706  397419 network_create.go:124] attempt to create docker network addons-093168 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 08:38:26.344757  397419 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-093168 addons-093168
	I0917 08:38:26.403233  397419 network_create.go:108] docker network addons-093168 192.168.49.0/24 created
	I0917 08:38:26.403277  397419 kic.go:121] calculated static IP "192.168.49.2" for the "addons-093168" container
	I0917 08:38:26.403354  397419 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 08:38:26.419565  397419 cli_runner.go:164] Run: docker volume create addons-093168 --label name.minikube.sigs.k8s.io=addons-093168 --label created_by.minikube.sigs.k8s.io=true
	I0917 08:38:26.436382  397419 oci.go:103] Successfully created a docker volume addons-093168
	I0917 08:38:26.436456  397419 cli_runner.go:164] Run: docker run --rm --name addons-093168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --entrypoint /usr/bin/test -v addons-093168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0917 08:38:33.360703  397419 cli_runner.go:217] Completed: docker run --rm --name addons-093168-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --entrypoint /usr/bin/test -v addons-093168:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (6.924191678s)
	I0917 08:38:33.360734  397419 oci.go:107] Successfully prepared a docker volume addons-093168
	I0917 08:38:33.360748  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:33.360770  397419 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 08:38:33.360820  397419 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-093168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 08:38:37.679996  397419 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-093168:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.31913353s)
	I0917 08:38:37.680031  397419 kic.go:203] duration metric: took 4.319258144s to extract preloaded images to volume ...
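The preload step above unpacks the image tarball into the docker volume by running a throwaway container whose entrypoint is tar, with the tarball and the volume both mounted in. A hedged Go sketch of the same pattern; the tarball path and image tag below are placeholders, not the exact artifacts from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload mounts the tarball read-only and the volume at
	// /extractDir, then lets tar (the container's entrypoint) unpack
	// with lz4 decompression.
	func extractPreload(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// hypothetical paths standing in for the jenkins cache tarball
		// and the pinned kicbase image from the log
		err := extractPreload("/tmp/preloaded-images.tar.lz4", "addons-093168",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.45")
		fmt.Println(err)
	}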
	W0917 08:38:37.680167  397419 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 08:38:37.680264  397419 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 08:38:37.730224  397419 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-093168 --name addons-093168 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-093168 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-093168 --network addons-093168 --ip 192.168.49.2 --volume addons-093168:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0917 08:38:38.015246  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Running}}
	I0917 08:38:38.033247  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.053229  397419 cli_runner.go:164] Run: docker exec addons-093168 stat /var/lib/dpkg/alternatives/iptables
	I0917 08:38:38.096763  397419 oci.go:144] the created container "addons-093168" has a running status.
	I0917 08:38:38.096799  397419 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa...
	I0917 08:38:38.316707  397419 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 08:38:38.338702  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.370614  397419 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 08:38:38.370640  397419 kic_runner.go:114] Args: [docker exec --privileged addons-093168 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 08:38:38.443014  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:38.468083  397419 machine.go:93] provisionDockerMachine start ...
	I0917 08:38:38.468181  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.487785  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.488024  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.488039  397419 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 08:38:38.683369  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093168
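The inspect format used here and throughout the rest of the log, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, resolves the host port docker published for the container's SSH port (33138 in this run). A small Go sketch of the same lookup via the docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortForSSH asks docker for the first host binding of 22/tcp,
	// using the same Go template as the log lines above.
	func hostPortForSSH(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostPortForSSH("addons-093168")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh docker@127.0.0.1 -p " + port) // 33138 in this run
	}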
	
	I0917 08:38:38.683409  397419 ubuntu.go:169] provisioning hostname "addons-093168"
	I0917 08:38:38.683487  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.701314  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.701561  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.701586  397419 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-093168 && echo "addons-093168" | sudo tee /etc/hostname
	I0917 08:38:38.842294  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093168
	
	I0917 08:38:38.842367  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:38.858454  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:38.858651  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:38.858675  397419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-093168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-093168/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-093168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 08:38:38.987912  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
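The SSH command above guarantees the node can resolve its own hostname: leave /etc/hosts alone if the name already appears, rewrite an existing 127.0.1.1 line if there is one, otherwise append a fresh entry. The same three-way logic as a self-contained Go sketch (in-memory only; writing the file back is omitted):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func ensureHostname(hosts, name string) string {
		// Already resolvable? The shell used grep -xq '.*\s<name>'.
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts
		}
		// Rewrite an existing 127.0.1.1 entry in place if there is one...
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loop.MatchString(hosts) {
			return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		// ...otherwise append a fresh entry, as the tee -a branch does.
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "addons-093168"))
	}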
	I0917 08:38:38.987964  397419 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19648-389277/.minikube CaCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19648-389277/.minikube}
	I0917 08:38:38.988009  397419 ubuntu.go:177] setting up certificates
	I0917 08:38:38.988022  397419 provision.go:84] configureAuth start
	I0917 08:38:38.988088  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.005336  397419 provision.go:143] copyHostCerts
	I0917 08:38:39.005415  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/key.pem (1679 bytes)
	I0917 08:38:39.005548  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/ca.pem (1082 bytes)
	I0917 08:38:39.005641  397419 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19648-389277/.minikube/cert.pem (1123 bytes)
	I0917 08:38:39.005712  397419 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem org=jenkins.addons-093168 san=[127.0.0.1 192.168.49.2 addons-093168 localhost minikube]
	I0917 08:38:39.090312  397419 provision.go:177] copyRemoteCerts
	I0917 08:38:39.090393  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 08:38:39.090456  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.106972  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.200856  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 08:38:39.222438  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 08:38:39.243612  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 08:38:39.265193  397419 provision.go:87] duration metric: took 277.150434ms to configureAuth
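configureAuth above generates a server certificate whose SANs cover every name and address the node answers to (the san=[...] list in the provision.go line). A stdlib-only Go sketch of minting such a certificate; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-093168"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provision.go line above
			DNSNames:    []string{"addons-093168", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		// Self-signed for brevity; real provisioning signs with the minikube CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}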
	I0917 08:38:39.265224  397419 ubuntu.go:193] setting minikube options for container-runtime
	I0917 08:38:39.265409  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:39.265521  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.282135  397419 main.go:141] libmachine: Using SSH client type: native
	I0917 08:38:39.282384  397419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 08:38:39.282416  397419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 08:38:39.504192  397419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 08:38:39.504224  397419 machine.go:96] duration metric: took 1.036114607s to provisionDockerMachine
	I0917 08:38:39.504238  397419 client.go:171] duration metric: took 13.605190317s to LocalClient.Create
	I0917 08:38:39.504260  397419 start.go:167] duration metric: took 13.605271001s to libmachine.API.Create "addons-093168"
	I0917 08:38:39.504270  397419 start.go:293] postStartSetup for "addons-093168" (driver="docker")
	I0917 08:38:39.504289  397419 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 08:38:39.504344  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 08:38:39.504394  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.522028  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.616778  397419 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 08:38:39.619852  397419 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 08:38:39.619881  397419 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 08:38:39.619889  397419 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 08:38:39.619897  397419 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0917 08:38:39.619908  397419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-389277/.minikube/addons for local assets ...
	I0917 08:38:39.619990  397419 filesync.go:126] Scanning /home/jenkins/minikube-integration/19648-389277/.minikube/files for local assets ...
	I0917 08:38:39.620018  397419 start.go:296] duration metric: took 115.734968ms for postStartSetup
	I0917 08:38:39.620325  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.637039  397419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/config.json ...
	I0917 08:38:39.637313  397419 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 08:38:39.637369  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.653547  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.748768  397419 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 08:38:39.752898  397419 start.go:128] duration metric: took 13.856163014s to createHost
	I0917 08:38:39.752925  397419 start.go:83] releasing machines lock for "addons-093168", held for 13.856335009s
	I0917 08:38:39.752987  397419 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-093168
	I0917 08:38:39.769324  397419 ssh_runner.go:195] Run: cat /version.json
	I0917 08:38:39.769390  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.769443  397419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 08:38:39.769521  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:39.786951  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.787867  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:39.941853  397419 ssh_runner.go:195] Run: systemctl --version
	I0917 08:38:39.946158  397419 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 08:38:40.084473  397419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 08:38:40.088727  397419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:40.106449  397419 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 08:38:40.106528  397419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 08:38:40.132230  397419 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
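The find/mv pipeline above sidelines conflicting CNI configs by renaming them with a .mk_disabled suffix so the runtime stops loading them. A Go sketch of the same rename pass; the directory in main is a scratch path, not a live /etc/cni/net.d:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConfs renames every file matching any pattern to
	// name.mk_disabled, mirroring the find/mv pipeline in the log.
	func disableConfs(dir string, patterns ...string) ([]string, error) {
		var disabled []string
		for _, p := range patterns {
			matches, err := filepath.Glob(filepath.Join(dir, p))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // mirrors find's -not -name *.mk_disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		// point this at a scratch copy, not a live /etc/cni/net.d
		got, err := disableConfs("/tmp/cni-net.d", "*bridge*", "*podman*")
		fmt.Println(got, err)
	}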
	I0917 08:38:40.132261  397419 start.go:495] detecting cgroup driver to use...
	I0917 08:38:40.132294  397419 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 08:38:40.132351  397419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 08:38:40.146387  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 08:38:40.156232  397419 docker.go:217] disabling cri-docker service (if available) ...
	I0917 08:38:40.156282  397419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 08:38:40.168347  397419 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 08:38:40.181162  397419 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 08:38:40.257135  397419 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 08:38:40.333605  397419 docker.go:233] disabling docker service ...
	I0917 08:38:40.333673  397419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 08:38:40.351601  397419 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 08:38:40.362162  397419 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 08:38:40.440587  397419 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 08:38:40.525972  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 08:38:40.536529  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 08:38:40.551093  397419 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 08:38:40.551153  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.559832  397419 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 08:38:40.559898  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.568567  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.577380  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.585958  397419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 08:38:40.594312  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.603119  397419 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.617231  397419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 08:38:40.626110  397419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 08:38:40.634005  397419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 08:38:40.641779  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:40.712061  397419 ssh_runner.go:195] Run: sudo systemctl restart crio
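All of the cri-o tweaks above are sed one-liners against /etc/crio/crio.conf.d/02-crio.conf: swap the pause image, force cgroupfs, re-add conmon_cgroup, and seed default_sysctls. A Go sketch of that line-oriented rewrite (a real TOML parser would be safer, but sed-style replacement is what the log shows):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setTOMLKey has the same shape as the sed commands: replace any line
	// mentioning `key = ` wholesale with `key = "value"`.
	func setTOMLKey(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
	}

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
		conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}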
	I0917 08:38:40.806565  397419 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 08:38:40.806642  397419 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 08:38:40.809970  397419 start.go:563] Will wait 60s for crictl version
	I0917 08:38:40.810032  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:38:40.812917  397419 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 08:38:40.845887  397419 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 08:38:40.845982  397419 ssh_runner.go:195] Run: crio --version
	I0917 08:38:40.880638  397419 ssh_runner.go:195] Run: crio --version
	I0917 08:38:40.915800  397419 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0917 08:38:40.917229  397419 cli_runner.go:164] Run: docker network inspect addons-093168 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 08:38:40.933605  397419 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 08:38:40.937163  397419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 08:38:40.947226  397419 kubeadm.go:883] updating cluster {Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 08:38:40.947379  397419 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0917 08:38:40.947455  397419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 08:38:41.008460  397419 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 08:38:41.008482  397419 crio.go:433] Images already preloaded, skipping extraction
	I0917 08:38:41.008524  397419 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 08:38:41.040345  397419 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 08:38:41.040370  397419 cache_images.go:84] Images are preloaded, skipping loading
	I0917 08:38:41.040378  397419 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0917 08:38:41.040480  397419 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-093168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 08:38:41.040565  397419 ssh_runner.go:195] Run: crio config
	I0917 08:38:41.080761  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:41.080783  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:41.080795  397419 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 08:38:41.080819  397419 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-093168 NodeName:addons-093168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 08:38:41.080967  397419 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-093168"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 08:38:41.081023  397419 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 08:38:41.089456  397419 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 08:38:41.089531  397419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 08:38:41.097438  397419 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 08:38:41.113372  397419 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 08:38:41.129326  397419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0917 08:38:41.144885  397419 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 08:38:41.147998  397419 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
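Both /etc/hosts updates in this run follow the same shape: filter out any stale entry, write the new mapping into a temp file, then copy it over the original in one step. A Go sketch of the equivalent write-temp-then-rename pattern; rename is atomic on a single filesystem, which the cp in the log is not guaranteed to be:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// replaceAtomically writes content to a temp file beside the target
	// and renames it into place, the Go analogue of the tmp-then-cp dance
	// in the log line above.
	func replaceAtomically(path string, content []byte) error {
		tmp, err := os.CreateTemp(filepath.Dir(path), ".hosts-*")
		if err != nil {
			return err
		}
		defer os.Remove(tmp.Name()) // best-effort cleanup; a no-op once the rename succeeds
		if _, err := tmp.Write(content); err != nil {
			tmp.Close()
			return err
		}
		if err := tmp.Close(); err != nil {
			return err
		}
		return os.Rename(tmp.Name(), path)
	}

	func main() {
		err := replaceAtomically("/tmp/hosts-demo",
			[]byte("192.168.49.2\tcontrol-plane.minikube.internal\n"))
		fmt.Println(err)
	}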
	I0917 08:38:41.157624  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:41.237475  397419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:41.249661  397419 certs.go:68] Setting up /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168 for IP: 192.168.49.2
	I0917 08:38:41.249683  397419 certs.go:194] generating shared ca certs ...
	I0917 08:38:41.249699  397419 certs.go:226] acquiring lock for ca certs: {Name:mk8da29d5216ae8373400245c621790543881904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.249825  397419 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key
	I0917 08:38:41.614404  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt ...
	I0917 08:38:41.614440  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt: {Name:mkd45d6a60b00dd159e65c0f1b6c2e5a8afabc01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.614666  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key ...
	I0917 08:38:41.614685  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key: {Name:mk5291de481583f940222c6612a96e62ccd87eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.614788  397419 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key
	I0917 08:38:41.754351  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt ...
	I0917 08:38:41.754383  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt: {Name:mk27ce36d6db90e160bdb0276068ed953effdbf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.754586  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key ...
	I0917 08:38:41.754606  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key: {Name:mk3afa86519521f4fca302906407d013abfb0d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:41.754709  397419 certs.go:256] generating profile certs ...
	I0917 08:38:41.754798  397419 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key
	I0917 08:38:41.754829  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt with IP's: []
	I0917 08:38:42.064154  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt ...
	I0917 08:38:42.064185  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: {Name:mk5cb5afe904908b0cba1bf17d824eee5c984153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.064362  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key ...
	I0917 08:38:42.064377  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.key: {Name:mkf2e14b11acd2448049e231dd4ead7716664bd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.064476  397419 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d
	I0917 08:38:42.064507  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 08:38:42.261028  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d ...
	I0917 08:38:42.261067  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d: {Name:mk077ce39ea3bb757e6d6ad979b544d7da0b437c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.261244  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d ...
	I0917 08:38:42.261257  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d: {Name:mk33433d67eea38775352092fed9c6a72038761a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.261329  397419 certs.go:381] copying /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt.a71e237d -> /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt
	I0917 08:38:42.261432  397419 certs.go:385] copying /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key.a71e237d -> /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key
	I0917 08:38:42.261485  397419 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key
	I0917 08:38:42.261504  397419 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt with IP's: []
	I0917 08:38:42.508375  397419 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt ...
	I0917 08:38:42.508413  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt: {Name:mk89431354833730cad316e358f6ad32f98671ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.508622  397419 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key ...
	I0917 08:38:42.508638  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key: {Name:mk49266541348c002ddfe954fcac3e31b23d5e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:42.508851  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 08:38:42.508900  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/ca.pem (1082 bytes)
	I0917 08:38:42.508938  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/cert.pem (1123 bytes)
	I0917 08:38:42.508966  397419 certs.go:484] found cert: /home/jenkins/minikube-integration/19648-389277/.minikube/certs/key.pem (1679 bytes)
	I0917 08:38:42.509614  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 08:38:42.532076  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 08:38:42.553868  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 08:38:42.575679  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 08:38:42.597095  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 08:38:42.618358  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 08:38:42.639563  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 08:38:42.660637  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 08:38:42.681627  397419 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 08:38:42.702968  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 08:38:42.718889  397419 ssh_runner.go:195] Run: openssl version
	I0917 08:38:42.724037  397419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 08:38:42.732397  397419 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.735486  397419 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.735536  397419 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 08:38:42.741586  397419 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
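OpenSSL locates CA certificates through subject-hash-named symlinks (b5213941.0 here), and the guarded command above only creates the link when nothing is at that path yet. The same guard as a Go sketch; computing the hash itself is left to openssl x509 -hash, exactly as in the log:

	package main

	import (
		"fmt"
		"os"
	)

	// ensureHashLink approximates the shell's `test -L || ln -fs`: if
	// anything already exists at link, leave it alone; otherwise create
	// the symlink.
	func ensureHashLink(target, link string) error {
		if _, err := os.Lstat(link); err == nil {
			return nil
		}
		return os.Symlink(target, link)
	}

	func main() {
		// hash name taken from the log; compute real ones with openssl x509 -hash
		err := ensureHashLink("/etc/ssl/certs/minikubeCA.pem", "/tmp/b5213941.0")
		fmt.Println(err)
	}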
	I0917 08:38:42.749881  397419 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 08:38:42.752874  397419 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 08:38:42.752930  397419 kubeadm.go:392] StartCluster: {Name:addons-093168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-093168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 08:38:42.753025  397419 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 08:38:42.753085  397419 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 08:38:42.786903  397419 cri.go:89] found id: ""
	I0917 08:38:42.786985  397419 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 08:38:42.796179  397419 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 08:38:42.804749  397419 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 08:38:42.804799  397419 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 08:38:42.812984  397419 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 08:38:42.813000  397419 kubeadm.go:157] found existing configuration files:
	
	I0917 08:38:42.813037  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 08:38:42.820866  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 08:38:42.820930  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 08:38:42.828240  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 08:38:42.835643  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 08:38:42.835737  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 08:38:42.843259  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 08:38:42.851080  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 08:38:42.851131  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 08:38:42.858437  397419 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 08:38:42.866098  397419 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 08:38:42.866156  397419 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 08:38:42.873252  397419 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
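The init invocation above prefixes PATH with the versioned binaries directory so kubeadm finds the matching kubelet, and folds every skipped preflight check into a single comma-separated flag. A Go sketch of assembling that invocation; the ignore list below is a shortened subset of the one in the log line:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func runKubeadmInit(binDir, config string, ignore []string) error {
		cmd := exec.Command(binDir+"/kubeadm", "init",
			"--config", config,
			"--ignore-preflight-errors="+strings.Join(ignore, ","))
		// PATH is prefixed so kubeadm resolves the versioned kubelet etc.;
		// os/exec uses the last value when a key is duplicated.
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		err := runKubeadmInit("/var/lib/minikube/binaries/v1.31.1",
			"/var/tmp/minikube/kubeadm.yaml",
			[]string{"Swap", "NumCPU", "Mem", "SystemVerification"})
		fmt.Println(err)
	}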
	I0917 08:38:42.908386  397419 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 08:38:42.908464  397419 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 08:38:42.923732  397419 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 08:38:42.923800  397419 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0917 08:38:42.923834  397419 kubeadm.go:310] OS: Linux
	I0917 08:38:42.923879  397419 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 08:38:42.923964  397419 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 08:38:42.924025  397419 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 08:38:42.924093  397419 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 08:38:42.924167  397419 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 08:38:42.924236  397419 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 08:38:42.924302  397419 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 08:38:42.924375  397419 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 08:38:42.924442  397419 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 08:38:42.973444  397419 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 08:38:42.973610  397419 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 08:38:42.973749  397419 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 08:38:42.979391  397419 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 08:38:42.982351  397419 out.go:235]   - Generating certificates and keys ...
	I0917 08:38:42.982445  397419 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 08:38:42.982558  397419 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 08:38:43.304222  397419 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 08:38:43.356991  397419 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 08:38:43.472470  397419 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 08:38:43.631625  397419 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 08:38:43.778369  397419 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 08:38:43.778571  397419 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-093168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:44.236292  397419 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 08:38:44.236448  397419 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-093168 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 08:38:44.386759  397419 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 08:38:44.547662  397419 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 08:38:45.256381  397419 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 08:38:45.256470  397419 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 08:38:45.352447  397419 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 08:38:45.496534  397419 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 08:38:45.783093  397419 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 08:38:45.948400  397419 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 08:38:46.126268  397419 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 08:38:46.126739  397419 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 08:38:46.129290  397419 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 08:38:46.131498  397419 out.go:235]   - Booting up control plane ...
	I0917 08:38:46.131624  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 08:38:46.131735  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 08:38:46.131825  397419 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 08:38:46.139890  397419 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 08:38:46.145973  397419 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 08:38:46.146041  397419 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 08:38:46.229694  397419 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 08:38:46.229838  397419 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 08:38:46.732374  397419 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.404175ms
	I0917 08:38:46.732502  397419 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 08:38:51.232483  397419 kubeadm.go:310] [api-check] The API server is healthy after 4.501470708s
	I0917 08:38:51.243357  397419 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 08:38:51.254150  397419 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 08:38:51.272346  397419 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 08:38:51.272569  397419 kubeadm.go:310] [mark-control-plane] Marking the node addons-093168 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 08:38:51.279966  397419 kubeadm.go:310] [bootstrap-token] Using token: k80no8.z164l1wfcaclt3ve
	I0917 08:38:51.281525  397419 out.go:235]   - Configuring RBAC rules ...
	I0917 08:38:51.281680  397419 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 08:38:51.284683  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 08:38:51.290003  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 08:38:51.293675  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 08:38:51.296125  397419 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 08:38:51.298653  397419 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 08:38:51.638681  397419 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 08:38:52.057839  397419 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 08:38:52.638211  397419 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 08:38:52.639067  397419 kubeadm.go:310] 
	I0917 08:38:52.639151  397419 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 08:38:52.639161  397419 kubeadm.go:310] 
	I0917 08:38:52.639256  397419 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 08:38:52.639296  397419 kubeadm.go:310] 
	I0917 08:38:52.639346  397419 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 08:38:52.639417  397419 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 08:38:52.639470  397419 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 08:38:52.639478  397419 kubeadm.go:310] 
	I0917 08:38:52.639522  397419 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 08:38:52.639529  397419 kubeadm.go:310] 
	I0917 08:38:52.639568  397419 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 08:38:52.639593  397419 kubeadm.go:310] 
	I0917 08:38:52.639638  397419 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 08:38:52.639707  397419 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 08:38:52.639770  397419 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 08:38:52.639776  397419 kubeadm.go:310] 
	I0917 08:38:52.639844  397419 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 08:38:52.639938  397419 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 08:38:52.639972  397419 kubeadm.go:310] 
	I0917 08:38:52.640081  397419 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k80no8.z164l1wfcaclt3ve \
	I0917 08:38:52.640203  397419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df9ded58c525a6d55df91cd644932b8a694d03f6beda3e691beb74ea1851cf09 \
	I0917 08:38:52.640238  397419 kubeadm.go:310] 	--control-plane 
	I0917 08:38:52.640248  397419 kubeadm.go:310] 
	I0917 08:38:52.640345  397419 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 08:38:52.640356  397419 kubeadm.go:310] 
	I0917 08:38:52.640453  397419 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k80no8.z164l1wfcaclt3ve \
	I0917 08:38:52.640571  397419 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df9ded58c525a6d55df91cd644932b8a694d03f6beda3e691beb74ea1851cf09 
	I0917 08:38:52.642642  397419 kubeadm.go:310] W0917 08:38:42.905770    1305 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:52.643061  397419 kubeadm.go:310] W0917 08:38:42.906409    1305 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 08:38:52.643311  397419 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0917 08:38:52.643438  397419 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
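The two join commands in the kubeadm output above pin the cluster CA with --discovery-token-ca-cert-hash: the value is a SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, so a joining node can verify the control plane independently of the bootstrap token. A minimal Go sketch of recomputing that digest; the certificate path is kubeadm's conventional location, an assumption this log does not confirm:

// cahash.go - recompute kubeadm's --discovery-token-ca-cert-hash from a CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumed path: kubeadm's default CA location.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}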
	I0917 08:38:52.643454  397419 cni.go:84] Creating CNI manager for ""
	I0917 08:38:52.643464  397419 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 08:38:52.645324  397419 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 08:38:52.646624  397419 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 08:38:52.650315  397419 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 08:38:52.650335  397419 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 08:38:52.667218  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 08:38:52.889823  397419 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 08:38:52.889885  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:52.889918  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-093168 minikube.k8s.io/updated_at=2024_09_17T08_38_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=addons-093168 minikube.k8s.io/primary=true
	I0917 08:38:52.897123  397419 ops.go:34] apiserver oom_adj: -16
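ops.go:34 records the API server's OOM adjust value (-16), read through /proc by the bash one-liner two lines up; a negative value keeps the kernel's OOM killer away from the apiserver. A rough Go equivalent of that probe; matching on /proc/<pid>/comm is a simplification of pgrep, used here only for illustration:

// oomadj.go - read a process's legacy OOM adjust value, like the
// "cat /proc/$(pgrep kube-apiserver)/oom_adj" step in the log.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	procs, err := filepath.Glob("/proc/[0-9]*/comm")
	if err != nil {
		log.Fatal(err)
	}
	for _, comm := range procs {
		name, err := os.ReadFile(comm)
		if err != nil {
			continue // process may have exited between glob and read
		}
		if strings.TrimSpace(string(name)) != "kube-apiserver" {
			continue
		}
		adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
		return
	}
	log.Fatal("kube-apiserver not found")
}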
	I0917 08:38:53.039509  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:53.539727  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.039909  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:54.539969  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.040209  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:55.540163  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.039997  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:56.540545  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.039787  397419 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 08:38:57.104143  397419 kubeadm.go:1113] duration metric: took 4.214320429s to wait for elevateKubeSystemPrivileges
	I0917 08:38:57.104195  397419 kubeadm.go:394] duration metric: took 14.351272056s to StartCluster
	I0917 08:38:57.104218  397419 settings.go:142] acquiring lock: {Name:mk95cfba95882d4e40150b5e054772c8fe045040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 08:38:57.104356  397419 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:57.104769  397419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19648-389277/kubeconfig: {Name:mk341f12644f68f3679935ee94cc49d156e11570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
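The lock.go line shows kubeconfig writes guarded by a named lock with a 500ms retry delay and a 1m timeout, so parallel goroutines cannot clobber the file. A minimal sketch of that acquire-with-timeout shape using an O_EXCL lock file; both the mechanism and the file name are assumptions for illustration, not minikube's actual lock implementation:

// filelock.go - acquire an exclusive lock with the retry shape the log's
// lock spec suggests ({Delay:500ms Timeout:1m0s}).
package main

import (
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // release callback
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Illustrative lock path, not minikube's.
	release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	defer release()
	fmt.Println("lock held; safe to write kubeconfig")
}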
	I0917 08:38:57.105015  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 08:38:57.105016  397419 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 08:38:57.105108  397419 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 08:38:57.105239  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:57.105256  397419 addons.go:69] Setting cloud-spanner=true in profile "addons-093168"
	I0917 08:38:57.105271  397419 addons.go:69] Setting gcp-auth=true in profile "addons-093168"
	I0917 08:38:57.105277  397419 addons.go:234] Setting addon cloud-spanner=true in "addons-093168"
	I0917 08:38:57.105276  397419 addons.go:69] Setting storage-provisioner=true in profile "addons-093168"
	I0917 08:38:57.105278  397419 addons.go:69] Setting volumesnapshots=true in profile "addons-093168"
	I0917 08:38:57.105298  397419 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-093168"
	I0917 08:38:57.105238  397419 addons.go:69] Setting yakd=true in profile "addons-093168"
	I0917 08:38:57.105296  397419 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-093168"
	I0917 08:38:57.105312  397419 addons.go:69] Setting registry=true in profile "addons-093168"
	I0917 08:38:57.105312  397419 addons.go:234] Setting addon volumesnapshots=true in "addons-093168"
	I0917 08:38:57.105317  397419 addons.go:234] Setting addon yakd=true in "addons-093168"
	I0917 08:38:57.105321  397419 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-093168"
	I0917 08:38:57.105323  397419 addons.go:69] Setting helm-tiller=true in profile "addons-093168"
	I0917 08:38:57.105332  397419 addons.go:69] Setting metrics-server=true in profile "addons-093168"
	I0917 08:38:57.105335  397419 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-093168"
	I0917 08:38:57.105259  397419 addons.go:69] Setting volcano=true in profile "addons-093168"
	I0917 08:38:57.105344  397419 addons.go:234] Setting addon metrics-server=true in "addons-093168"
	I0917 08:38:57.105245  397419 addons.go:69] Setting inspektor-gadget=true in profile "addons-093168"
	I0917 08:38:57.105347  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105351  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105324  397419 addons.go:234] Setting addon registry=true in "addons-093168"
	I0917 08:38:57.105357  397419 addons.go:234] Setting addon inspektor-gadget=true in "addons-093168"
	I0917 08:38:57.105362  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105353  397419 addons.go:234] Setting addon volcano=true in "addons-093168"
	I0917 08:38:57.105486  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105291  397419 mustload.go:65] Loading cluster: addons-093168
	I0917 08:38:57.105336  397419 addons.go:234] Setting addon helm-tiller=true in "addons-093168"
	I0917 08:38:57.105608  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105707  397419 config.go:182] Loaded profile config "addons-093168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 08:38:57.105371  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105931  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105935  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105960  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105377  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.106050  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106193  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105960  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106458  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.106627  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105313  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105345  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.107248  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105376  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.105250  397419 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-093168"
	I0917 08:38:57.108052  397419 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-093168"
	I0917 08:38:57.108362  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.105380  397419 addons.go:69] Setting default-storageclass=true in profile "addons-093168"
	I0917 08:38:57.105302  397419 addons.go:234] Setting addon storage-provisioner=true in "addons-093168"
	I0917 08:38:57.105388  397419 addons.go:69] Setting ingress-dns=true in profile "addons-093168"
	I0917 08:38:57.105386  397419 addons.go:69] Setting ingress=true in profile "addons-093168"
	I0917 08:38:57.108644  397419 addons.go:234] Setting addon ingress=true in "addons-093168"
	I0917 08:38:57.108680  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108700  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108747  397419 addons.go:234] Setting addon ingress-dns=true in "addons-093168"
	I0917 08:38:57.108788  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.108821  397419 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-093168"
	I0917 08:38:57.112690  397419 out.go:177] * Verifying Kubernetes components...
	I0917 08:38:57.114189  397419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 08:38:57.124402  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.124402  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.124587  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125036  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125084  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.125993  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
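Each addon goroutine above re-checks the machine by shelling out to docker container inspect with a Go template that extracts only the container state. The same probe, reduced to a helper; the container name is this run's profile:

// dockerstate.go - query a container's state the way the cli_runner lines do:
// docker container inspect <name> --format={{.State.Status}}
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-093168")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("state:", status) // e.g. "running"
}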
	I0917 08:38:57.143502  397419 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 08:38:57.144872  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 08:38:57.144901  397419 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 08:38:57.144980  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	W0917 08:38:57.150681  397419 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0917 08:38:57.153691  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.155722  397419 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 08:38:57.159231  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 08:38:57.159256  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 08:38:57.159314  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.172289  397419 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 08:38:57.176642  397419 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:57.176666  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 08:38:57.176733  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.193988  397419 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 08:38:57.196004  397419 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 08:38:57.197115  397419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:57.197136  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 08:38:57.197200  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.202125  397419 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 08:38:57.203455  397419 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 08:38:57.203530  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 08:38:57.203679  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.204660  397419 addons.go:234] Setting addon default-storageclass=true in "addons-093168"
	I0917 08:38:57.204707  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.204824  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 08:38:57.205196  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.207284  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:57.207449  397419 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 08:38:57.208612  397419 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 08:38:57.208633  397419 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 08:38:57.208701  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.208883  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:38:57.210517  397419 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:57.210538  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 08:38:57.210595  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.210853  397419 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 08:38:57.212148  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 08:38:57.212167  397419 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 08:38:57.212221  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.216414  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.219236  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 08:38:57.221033  397419 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-093168"
	I0917 08:38:57.221085  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:38:57.221137  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 08:38:57.221157  397419 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 08:38:57.221227  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.221586  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:38:57.221963  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 08:38:57.223885  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 08:38:57.225253  397419 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 08:38:57.226499  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 08:38:57.226722  397419 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:57.226737  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 08:38:57.226802  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.229771  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 08:38:57.229842  397419 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 08:38:57.231204  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 08:38:57.231925  397419 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:57.231954  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 08:38:57.232015  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.240168  397419 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:57.240188  397419 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 08:38:57.240249  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.251934  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.253019  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.256107  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.256961  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 08:38:57.270556  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 08:38:57.272877  397419 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 08:38:57.274130  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 08:38:57.274138  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.274160  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 08:38:57.274232  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.286114  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.286432  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.286552  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.287928  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.292989  397419 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 08:38:57.293246  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.295525  397419 out.go:177]   - Using image docker.io/busybox:stable
	I0917 08:38:57.295767  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.297062  397419 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:57.297077  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 08:38:57.297117  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:38:57.299372  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.306226  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:38:57.314733  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	W0917 08:38:57.337065  397419 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 08:38:57.337105  397419 retry.go:31] will retry after 135.437372ms: ssh: handshake failed: EOF
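One of the parallel SSH dials hit a transient handshake EOF, and retry.go schedules another attempt after a short delay rather than failing the whole addon. A minimal sketch of that retry shape; the attempt count, base delay, and jitter are illustrative, not minikube's tuning:

// retrydial.go - retry a flaky operation with a short randomized delay,
// the pattern the retry.go:31 line above reports.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Randomize the delay so concurrent callers do not retry in lockstep.
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}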
	I0917 08:38:57.346335  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
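The pipeline just launched rewrites CoreDNS's ConfigMap in place: sed inserts a hosts block ahead of the "forward . /etc/resolv.conf" directive and a "log" directive after "errors", then kubectl replace pushes the result back. Reconstructed from the sed expressions, the injected Corefile fragment is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

so pods can resolve host.minikube.internal to the host's gateway address, while fallthrough sends every other name on to the normal forwarders.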
	I0917 08:38:57.356789  397419 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 08:38:57.538116  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 08:38:57.538148  397419 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 08:38:57.541546  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 08:38:57.642930  397419 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:38:57.642961  397419 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 08:38:57.652875  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 08:38:57.652902  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 08:38:57.744251  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 08:38:57.752468  397419 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 08:38:57.752499  397419 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 08:38:57.753674  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 08:38:57.753698  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 08:38:57.833558  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 08:38:57.833662  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 08:38:57.834064  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 08:38:57.835232  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 08:38:57.842341  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 08:38:57.842375  397419 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 08:38:57.849540  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 08:38:57.853917  397419 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 08:38:57.853947  397419 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 08:38:57.936443  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 08:38:57.936758  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 08:38:57.936784  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 08:38:57.938952  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 08:38:57.941233  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 08:38:57.941258  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 08:38:58.033712  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 08:38:58.034229  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 08:38:58.034295  397419 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 08:38:58.046437  397419 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 08:38:58.046529  397419 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 08:38:58.047136  397419 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 08:38:58.047196  397419 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 08:38:58.133693  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 08:38:58.133782  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 08:38:58.139956  397419 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:58.139985  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 08:38:58.233802  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 08:38:58.233848  397419 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 08:38:58.252638  397419 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:38:58.252687  397419 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 08:38:58.254386  397419 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 08:38:58.254464  397419 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 08:38:58.333784  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 08:38:58.333878  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 08:38:58.449224  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 08:38:58.449259  397419 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 08:38:58.449658  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 08:38:58.548889  397419 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:38:58.548923  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 08:38:58.633498  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 08:38:58.633532  397419 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 08:38:58.633842  397419 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 08:38:58.633864  397419 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 08:38:58.634541  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 08:38:58.750791  397419 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 08:38:58.750827  397419 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 08:38:58.936229  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 08:38:59.233524  397419 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:38:59.233625  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 08:38:59.333560  397419 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 08:38:59.333595  397419 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 08:38:59.653548  397419 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 08:38:59.653582  397419 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 08:38:59.654019  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 08:38:59.654039  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 08:38:59.750974  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 08:38:59.844245  397419 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.497868768s)
	I0917 08:38:59.844279  397419 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0917 08:38:59.845507  397419 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.48868759s)
	I0917 08:38:59.846428  397419 node_ready.go:35] waiting up to 6m0s for node "addons-093168" to be "Ready" ...
	I0917 08:39:00.150766  397419 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:00.150864  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 08:39:00.241261  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 08:39:00.241385  397419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 08:39:00.434396  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 08:39:00.434751  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 08:39:00.434837  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 08:39:00.550189  397419 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-093168" context rescaled to 1 replicas
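The kapi.go:214 line reports the coredns deployment rescaled to one replica, which is enough on a single-node cluster and frees a little memory. A sketch of the same rescale via kubectl; namespace and deployment name are as shown in the log:

// corednsscale.go - scale CoreDNS down to a single replica.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--namespace", "kube-system",
		"scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
	if err != nil {
		log.Fatalf("scale failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}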
	I0917 08:39:00.748755  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 08:39:00.748843  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 08:39:00.937410  397419 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:00.937442  397419 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 08:39:01.233803  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 08:39:01.943544  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:03.261179  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.719582492s)
	I0917 08:39:03.261217  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.516878812s)
	I0917 08:39:03.261224  397419 addons.go:475] Verifying addon ingress=true in "addons-093168"
	I0917 08:39:03.261298  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.427173682s)
	I0917 08:39:03.261369  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.426103213s)
	I0917 08:39:03.261406  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.411830401s)
	I0917 08:39:03.261493  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.325021448s)
	I0917 08:39:03.261534  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.322551933s)
	I0917 08:39:03.261613  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.227807299s)
	I0917 08:39:03.261653  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.811965691s)
	I0917 08:39:03.261677  397419 addons.go:475] Verifying addon registry=true in "addons-093168"
	I0917 08:39:03.261733  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.627156118s)
	I0917 08:39:03.261799  397419 addons.go:475] Verifying addon metrics-server=true in "addons-093168"
	I0917 08:39:03.263039  397419 out.go:177] * Verifying ingress addon...
	I0917 08:39:03.264106  397419 out.go:177] * Verifying registry addon...
	I0917 08:39:03.265798  397419 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 08:39:03.266577  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 08:39:03.338558  397419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:03.338666  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:03.338842  397419 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 08:39:03.338910  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 08:39:03.344429  397419 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
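This warning is a textbook optimistic-concurrency conflict: two writers raced to update the storageclasses object, and the loser's stale resourceVersion was rejected, exactly as the error text says ("please apply your changes to the latest version and try again"). The standard fix is to re-read the object and re-apply the mutation on each conflict; a sketch with client-go's RetryOnConflict, where the kubeconfig path and the local-path class name come from this log but the program itself is illustrative:

// defaultsc.go - mark a StorageClass as default, retrying on update conflicts.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Kubeconfig path as logged by this run; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Get a fresh copy so the update carries the latest resourceVersion.
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("local-path marked as default storage class")
}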
	I0917 08:39:03.835535  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:03.868020  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.931736693s)
	W0917 08:39:03.868122  397419 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 08:39:03.868142  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.117119927s)
	I0917 08:39:03.868181  397419 retry.go:31] will retry after 226.647603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 08:39:03.868254  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.433802493s)
	I0917 08:39:03.869652  397419 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-093168 service yakd-dashboard -n yakd-dashboard
	
	I0917 08:39:03.934770  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.095668  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
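The failed batch above and this --force retry are a CRD ordering race: the VolumeSnapshotClass object was submitted in the same apply as the CRDs that define its kind, and the API server had not yet registered snapshot.storage.k8s.io/v1 ("ensure CRDs are installed first"). Splitting the apply and waiting for the CRD to report Established avoids the race entirely; a sketch shelling out to kubectl, with the file paths and CRD name taken from the error text:

// crdwait.go - apply CRDs first, wait for them to be Established, then
// apply the custom resources that depend on them.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// 1. CRDs only.
	kubectl("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
	// 2. Block until the API server can serve the new kind.
	kubectl("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	// 3. Now the VolumeSnapshotClass object maps to a known kind.
	kubectl("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
}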
	I0917 08:39:04.269371  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.269859  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:04.350132  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:04.360728  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 08:39:04.360808  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:39:04.384783  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:39:04.471408  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.23753895s)
	I0917 08:39:04.471460  397419 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-093168"
	I0917 08:39:04.473008  397419 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 08:39:04.475211  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 08:39:04.535330  397419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:04.535353  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
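These kapi.go loops poll pods by label selector until they leave Pending; the csi-hostpath-driver selector currently matches two Pending pods. The same condition can be expressed as a single blocking call; a sketch with kubectl wait, using the selector and namespace from the log (the timeout is illustrative):

// podwait.go - block until pods matching the addon's label selector are Ready,
// the condition the kapi.go lines above poll for.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--namespace", "kube-system",
		"wait", "--for=condition=Ready", "--timeout=6m",
		"pod", "-l", "kubernetes.io/minikube-addons=csi-hostpath-driver")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("pods not ready: %v\n%s", err, out)
	}
	log.Print("csi-hostpath-driver pods are Ready")
}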
	I0917 08:39:04.598789  397419 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 08:39:04.615582  397419 addons.go:234] Setting addon gcp-auth=true in "addons-093168"
	I0917 08:39:04.615652  397419 host.go:66] Checking if "addons-093168" exists ...
	I0917 08:39:04.616089  397419 cli_runner.go:164] Run: docker container inspect addons-093168 --format={{.State.Status}}
	I0917 08:39:04.633132  397419 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 08:39:04.633192  397419 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-093168
	I0917 08:39:04.651065  397419 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/addons-093168/id_rsa Username:docker}
	I0917 08:39:04.769973  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:04.770233  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.035291  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:05.335175  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:05.336078  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.535256  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:05.769510  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:05.769763  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:05.979262  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.269556  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:06.269756  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:06.350348  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:06.479032  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.769819  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:06.770387  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:06.979151  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:06.991964  397419 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.89623192s)
	I0917 08:39:06.992009  397419 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.358851016s)
	I0917 08:39:06.993965  397419 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 08:39:06.995369  397419 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 08:39:06.996678  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 08:39:06.996699  397419 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 08:39:07.050138  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 08:39:07.050166  397419 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 08:39:07.070212  397419 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:07.070239  397419 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 08:39:07.088585  397419 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 08:39:07.269903  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:07.270150  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:07.478566  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:07.742409  397419 addons.go:475] Verifying addon gcp-auth=true in "addons-093168"
	I0917 08:39:07.743971  397419 out.go:177] * Verifying gcp-auth addon...
	I0917 08:39:07.746772  397419 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 08:39:07.749628  397419 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 08:39:07.749648  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:07.850058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:07.850470  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:07.980638  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:08.250181  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:08.269219  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:08.269486  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:08.478757  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:08.750637  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:08.769245  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:08.769763  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:08.849706  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:08.978545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:09.250459  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:09.269495  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.269663  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:09.479237  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:09.749689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:09.769399  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:09.769720  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:09.978863  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:10.250410  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:10.269526  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:10.269619  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.478837  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:10.750940  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:10.769805  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:10.770515  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:10.979280  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:11.249995  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:11.269719  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:11.270190  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.350491  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:11.478320  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:11.750247  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:11.769390  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:11.769429  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:11.978986  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:12.250516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:12.269587  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.269693  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.480184  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:12.750404  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:12.769444  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:12.769591  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:12.978948  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.250817  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:13.269637  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.270016  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.479104  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:13.749738  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:13.769523  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:13.769820  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:13.850119  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:13.978949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.249884  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:14.269638  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.270062  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.479204  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:14.749928  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:14.769438  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:14.769821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:14.978839  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.250562  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:15.269409  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.269947  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.478860  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:15.750835  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:15.769345  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:15.770015  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:15.850276  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:15.979293  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.250064  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:16.269826  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.270274  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.478595  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:16.750278  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:16.769441  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:16.769627  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:16.978785  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.249585  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:17.269341  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.269848  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.479260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:17.749952  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:17.769578  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:17.769936  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:17.979325  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.249779  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:18.269465  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.269775  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.350075  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:18.478976  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:18.750758  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:18.769496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:18.769979  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:18.979120  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.249745  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:19.269362  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.269944  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.479390  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:19.749971  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:19.769917  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:19.770115  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:19.978384  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.250150  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:20.269613  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.270040  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.479591  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:20.750572  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:20.769329  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:20.769808  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:20.849500  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:20.978496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.250173  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:21.269174  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.269534  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.478769  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:21.751128  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:21.769357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:21.769371  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:21.978913  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.250688  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:22.269349  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.269695  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.478881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:22.750753  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:22.769486  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:22.769809  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:22.849938  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:22.981047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.249913  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:23.269440  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.269919  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.478892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:23.750856  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:23.769354  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:23.769865  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:23.978955  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.249899  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:24.269545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.269991  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.479144  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:24.750022  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:24.769833  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:24.770464  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:24.978298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.250252  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:25.269224  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.269557  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.350289  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:25.479127  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:25.749639  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:25.769205  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:25.769585  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:25.979064  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.250038  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:26.269663  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.270152  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.478995  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:26.750285  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:26.769308  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:26.769370  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:26.978745  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.250676  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:27.269322  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.269652  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.478412  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:27.750691  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:27.769200  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:27.769604  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:27.849933  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:27.979206  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.249964  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:28.269520  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.269919  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.479193  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:28.749933  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:28.769877  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:28.770211  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:28.979141  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.249874  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:29.270072  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.270348  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.478073  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:29.749899  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:29.769818  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:29.770374  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:29.979288  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.250272  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:30.269500  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.269546  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.350342  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:30.479086  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:30.749787  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:30.769541  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:30.770013  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:30.979093  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.250841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:31.269421  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.269882  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.479027  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:31.749892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:31.769497  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:31.769834  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:31.979224  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.250379  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:32.269381  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.269400  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.479357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:32.750376  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:32.769602  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:32.769757  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:32.850423  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:32.979114  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.251004  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:33.269908  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.270175  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.479600  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:33.749949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:33.769584  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:33.770008  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:33.979236  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.250012  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:34.269687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.270180  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.479255  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:34.750023  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:34.769580  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:34.770002  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:34.978387  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.250069  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:35.269828  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.270241  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.349451  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:35.478206  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:35.749945  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:35.769452  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:35.769865  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:35.978859  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.250835  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:36.269592  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.269917  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.478473  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:36.750428  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:36.769595  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:36.769685  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:36.978362  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.250516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:37.269304  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.269681  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.350217  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:37.479043  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:37.750460  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:37.769597  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:37.769948  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:37.978771  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.250668  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:38.269338  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.269667  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.478938  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:38.750692  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:38.769540  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:38.770044  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:38.979152  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.249775  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:39.269195  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:39.269607  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.478771  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:39.750626  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:39.769136  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:39.769575  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:39.850038  397419 node_ready.go:53] node "addons-093168" has status "Ready":"False"
	I0917 08:39:39.979047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.249695  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:40.269441  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:40.269779  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.479084  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:40.749817  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:40.769332  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:40.769870  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:40.978708  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.250949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:41.269314  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:41.269830  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.480399  397419 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 08:39:41.480422  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:41.760397  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:41.837192  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:41.837670  397419 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 08:39:41.837689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:41.849891  397419 node_ready.go:49] node "addons-093168" has status "Ready":"True"
	I0917 08:39:41.849914  397419 node_ready.go:38] duration metric: took 42.0034583s for node "addons-093168" to be "Ready" ...
	I0917 08:39:41.849924  397419 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 08:39:41.858669  397419 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace to be "Ready" ...
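
[Editor's note: the transition just above, from has status "Ready":"False" to "Ready":"True" after about 42s, is the node_ready check: a node counts as Ready when its NodeReady condition is ConditionTrue. A minimal sketch of that check (nodeIsReady is an illustrative helper name, not minikube's exact node_ready.go code):

package kapi

import (
	corev1 "k8s.io/api/core/v1"
)

func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

End of note; log continues below.]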
	I0917 08:39:42.038738  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.251747  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:42.352912  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:42.353583  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.479530  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:42.750176  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:42.770265  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:42.770895  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:42.979804  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.251776  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:43.351669  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:43.352090  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.364736  397419 pod_ready.go:93] pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.364757  397419 pod_ready.go:82] duration metric: took 1.50606765s for pod "coredns-7c65d6cfc9-7lhft" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.364777  397419 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.369471  397419 pod_ready.go:93] pod "etcd-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.369494  397419 pod_ready.go:82] duration metric: took 4.709608ms for pod "etcd-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.369508  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.373655  397419 pod_ready.go:93] pod "kube-apiserver-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.373672  397419 pod_ready.go:82] duration metric: took 4.156439ms for pod "kube-apiserver-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.373680  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.377527  397419 pod_ready.go:93] pod "kube-controller-manager-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.377561  397419 pod_ready.go:82] duration metric: took 3.873985ms for pod "kube-controller-manager-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.377572  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t77c5" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.450713  397419 pod_ready.go:93] pod "kube-proxy-t77c5" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.450741  397419 pod_ready.go:82] duration metric: took 73.161651ms for pod "kube-proxy-t77c5" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.450755  397419 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.479047  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:43.750717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:43.769660  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:43.769998  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:43.850947  397419 pod_ready.go:93] pod "kube-scheduler-addons-093168" in "kube-system" namespace has status "Ready":"True"
	I0917 08:39:43.850971  397419 pod_ready.go:82] duration metric: took 400.20789ms for pod "kube-scheduler-addons-093168" in "kube-system" namespace to be "Ready" ...
	I0917 08:39:43.850982  397419 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace to be "Ready" ...
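
[Editor's note: the pod_ready lines above report pod status "Ready":"True"/"False", which is a different check from the kapi phase polling (Pending/Running) elsewhere in this log: pod_ready inspects the PodReady condition in the pod's status, not its phase. A sketch of that check under the same assumptions as the snippets above (podIsReady is an illustrative name):

package kapi

import (
	corev1 "k8s.io/api/core/v1"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

End of note; log continues below.]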
	I0917 08:39:43.980093  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.250260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:44.269521  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:44.270044  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:44.479161  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:44.750804  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:44.770420  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:44.770636  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.035777  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.250723  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:45.269748  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:45.270038  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.480689  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:45.750763  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:45.769885  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:45.770680  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:45.857292  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:45.980017  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.250727  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:46.269788  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:46.270046  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:46.539234  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:46.751501  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:46.835507  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:46.836067  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.036749  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.250892  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:47.336881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:47.336877  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.536654  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:47.750566  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:47.770379  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:47.770654  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:47.857353  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:47.980545  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.251036  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:48.270119  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:48.270766  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.481111  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:48.751338  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:48.770188  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:48.771890  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:48.980058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.250249  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:49.270268  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:49.270358  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.480036  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:49.750762  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:49.770978  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:49.772174  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:49.857941  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:49.980041  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.250706  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:50.269862  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:50.270014  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:50.480731  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:50.751060  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:50.770120  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:50.770641  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.035548  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.250927  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:51.337208  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:51.337503  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.480679  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:51.750819  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:51.769976  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:51.770649  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:51.980192  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.250287  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:52.273280  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.353216  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:52.356559  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:52.479644  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:52.750695  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:52.769840  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:52.769992  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:52.980341  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.250812  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:53.269713  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:53.269993  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.479306  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:53.751203  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:53.769942  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:53.770231  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:53.982444  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.251381  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:54.270391  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:54.270907  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.357551  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:54.479329  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:54.750585  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:54.769800  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:54.770242  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:54.980330  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.250105  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:55.272058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:55.272343  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.480049  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:55.750228  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:55.769721  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:55.769811  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:55.979630  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.250644  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:56.270143  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.270801  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:56.361917  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:56.535770  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:56.750820  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:56.770318  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:56.834677  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.037436  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:57.251657  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:57.338559  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.340296  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:57.539728  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:57.750702  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:57.836323  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:57.836465  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.035687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:58.250979  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:58.270445  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:58.270847  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.480099  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:58.750815  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:58.770260  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:58.770835  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:58.858855  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:39:58.980298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:59.250242  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:59.271058  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:59.271285  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:59.534742  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:39:59.749993  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:39:59.770735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:39:59.770822  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:39:59.980421  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:00.250549  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:00.269795  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:00.270066  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:00.481133  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:00.750352  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:00.770060  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:00.770078  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:00.980516  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:01.250748  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:01.269906  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:01.270542  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:01.357167  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:01.479831  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:01.750735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:01.851522  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:01.852196  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:01.980255  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:02.250668  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:02.270004  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:02.270239  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:02.480121  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:02.750937  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:02.770293  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:02.770548  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:02.980319  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:03.250471  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:03.269687  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:03.270015  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:03.358379  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:03.480308  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:03.750910  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:03.769915  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:03.770350  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:03.980888  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:04.250949  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:04.334052  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:04.334547  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:04.536288  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:04.751331  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:04.769923  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:04.770074  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:04.979484  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:05.250753  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:05.269588  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:05.270367  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:05.479717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:05.750044  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:05.770343  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:05.770697  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:05.857232  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:05.980252  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:06.250527  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:06.269894  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:06.270178  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:06.479711  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:06.750183  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:06.771071  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:06.771665  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:06.979659  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:07.251357  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:07.270510  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:07.270939  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:07.480189  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:07.750845  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:07.770209  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:07.771533  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:07.857980  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:07.983095  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:08.250342  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:08.270999  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:08.271094  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:08.479975  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:08.751137  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:08.770431  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:08.770712  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:08.980321  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:09.251024  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:09.270126  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:09.270735  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:09.480983  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:09.751277  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:09.769930  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:09.770147  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:09.980150  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:10.250493  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:10.269821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:10.271102  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 08:40:10.356970  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:10.481755  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:10.749841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:10.769711  397419 kapi.go:107] duration metric: took 1m7.503126792s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 08:40:10.770295  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:10.979832  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:11.250142  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:11.270431  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:11.480956  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:11.753496  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:11.770003  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:11.980475  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:12.250784  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:12.270813  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:12.357211  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:12.480873  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:12.751126  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:12.770604  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:12.979811  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:13.250139  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:13.270888  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:13.480241  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:13.750443  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:13.769994  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:13.979631  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:14.250829  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:14.270340  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:14.480298  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:14.750382  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:14.769880  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:14.857115  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:14.980593  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:15.250737  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:15.269909  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:15.480460  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:15.750879  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:15.770052  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:15.979744  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:16.251095  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:16.270338  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:16.480567  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:16.749687  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:16.770077  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.035489  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:17.250313  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:17.269943  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.356644  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:17.480054  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:17.750392  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:17.769702  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:17.980088  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:18.250474  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:18.269932  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:18.511698  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:18.750521  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:18.852675  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:18.979597  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:19.249859  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:19.270206  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:19.357692  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:19.480159  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:19.750104  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:19.771108  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:19.979504  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:20.251660  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:20.271175  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:20.480098  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:20.750670  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:20.770690  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:20.980839  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:21.250744  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:21.270685  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:21.357832  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:21.480348  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:21.750284  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:21.769821  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:21.981107  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:22.249898  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 08:40:22.270237  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:22.480433  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:22.750573  397419 kapi.go:107] duration metric: took 1m15.003789133s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 08:40:22.752532  397419 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-093168 cluster.
	I0917 08:40:22.753817  397419 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 08:40:22.755155  397419 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
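The three advisory lines above describe the gcp-auth addon's opt-out mechanism: pods carrying the `gcp-auth-skip-secret` label key are left alone by the credential-mounting webhook. A minimal sketch of opting a pod out by hand — the pod name and image here are hypothetical, and the label value "true" is an assumption, since the log only names the key:

	# Hypothetical pod the gcp-auth webhook should skip.
	# Only the label key `gcp-auth-skip-secret` comes from the log above;
	# the value "true" is assumed.
	kubectl --context addons-093168 run gcp-auth-skip-demo \
	  --image=gcr.io/k8s-minikube/busybox \
	  --labels="gcp-auth-skip-secret=true" \
	  -- sleep 3600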
	I0917 08:40:22.769882  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:22.979715  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:23.270378  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:23.480884  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:23.770749  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:23.856903  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:23.979682  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:24.270418  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:24.481750  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:24.838546  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:24.979926  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:25.336387  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:25.536841  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:25.836400  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:25.857822  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:26.038227  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:26.270962  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:26.480310  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:26.769993  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:26.979717  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:27.270245  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:27.479626  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:27.770138  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:27.979728  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:28.270445  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:28.357521  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:28.479512  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:28.771302  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:28.980203  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:29.272777  397419 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 08:40:29.479974  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:29.771290  397419 kapi.go:107] duration metric: took 1m26.505487302s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 08:40:30.036881  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:30.480783  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:30.856907  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:30.980652  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:31.480186  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:31.979880  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:32.481022  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:32.979408  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:33.357762  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:33.479779  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:33.979963  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:34.480525  397419 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 08:40:34.980951  397419 kapi.go:107] duration metric: took 1m30.505737137s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 08:40:35.011214  397419 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, helm-tiller, nvidia-device-plugin, storage-provisioner, metrics-server, default-storageclass, inspektor-gadget, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0917 08:40:35.088827  397419 addons.go:510] duration metric: took 1m37.983731495s for enable addons: enabled=[cloud-spanner ingress-dns helm-tiller nvidia-device-plugin storage-provisioner metrics-server default-storageclass inspektor-gadget yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
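The two summary lines above close out the addon-enablement phase: all fourteen addons reported ready after roughly 1m38s. One way to confirm the same state interactively, assuming the standard `addons list` subcommand and the profile name used throughout this run:

	# List addon status for this profile; the enabled set should match the
	# summary in the log line above.
	out/minikube-linux-amd64 -p addons-093168 addons list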
	I0917 08:40:35.963282  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:38.356952  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:40.357057  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:42.857137  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:45.357585  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:47.415219  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:49.856695  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:52.357369  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:54.856959  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:56.857573  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:40:59.356748  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:01.357311  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:03.857150  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:05.857298  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:08.356921  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:10.856637  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:12.857089  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:15.356886  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:17.357162  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:19.857088  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:21.857768  397419 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"False"
	I0917 08:41:22.357225  397419 pod_ready.go:93] pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace has status "Ready":"True"
	I0917 08:41:22.357248  397419 pod_ready.go:82] duration metric: took 1m38.50625923s for pod "metrics-server-84c5f94fbc-bmr95" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.357261  397419 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.361581  397419 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace has status "Ready":"True"
	I0917 08:41:22.361602  397419 pod_ready.go:82] duration metric: took 4.33393ms for pod "nvidia-device-plugin-daemonset-fxm5v" in "kube-system" namespace to be "Ready" ...
	I0917 08:41:22.361622  397419 pod_ready.go:39] duration metric: took 1m40.511686973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
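The pod_ready block that ends above is minikube's internal readiness polling; an equivalent check can be run directly with kubectl's wait verb. A sketch, assuming the context and pod name from this log are still current:

	# Block until the metrics-server pod reports the Ready condition,
	# mirroring the 6m0s per-pod budget the log shows.
	kubectl --context addons-093168 -n kube-system \
	  wait --for=condition=Ready pod/metrics-server-84c5f94fbc-bmr95 --timeout=6m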
	I0917 08:41:22.361642  397419 api_server.go:52] waiting for apiserver process to appear ...
	I0917 08:41:22.361682  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:22.361731  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:22.396772  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:22.396810  397419 cri.go:89] found id: ""
	I0917 08:41:22.396820  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:22.396885  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.401393  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:22.401457  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:22.433869  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:22.433890  397419 cri.go:89] found id: ""
	I0917 08:41:22.433898  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:22.433944  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.437332  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:22.437407  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:22.472376  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:22.472397  397419 cri.go:89] found id: ""
	I0917 08:41:22.472404  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:22.472448  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.475763  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:22.475824  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:22.509241  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:22.509272  397419 cri.go:89] found id: ""
	I0917 08:41:22.509284  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:22.509335  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.512804  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:22.512865  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:22.546986  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:22.547007  397419 cri.go:89] found id: ""
	I0917 08:41:22.547015  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:22.547060  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.550402  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:22.550459  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:22.584566  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:22.584588  397419 cri.go:89] found id: ""
	I0917 08:41:22.584604  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:22.584655  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.588033  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:22.588092  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:22.621636  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:22.621662  397419 cri.go:89] found id: ""
	I0917 08:41:22.621672  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:22.621725  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:22.625177  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:22.625207  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:22.651122  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:22.651158  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:22.750350  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:22.750382  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:22.794944  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:22.794981  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:22.847406  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:22.847443  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:22.882612  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:22.882647  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:22.938657  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:22.938694  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:22.980301  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:22.980332  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:23.057322  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:23.057359  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:23.092524  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:23.092557  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:23.129832  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:23.129871  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:23.165427  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:23.165458  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:25.744385  397419 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 08:41:25.758404  397419 api_server.go:72] duration metric: took 2m28.653351209s to wait for apiserver process to appear ...
	I0917 08:41:25.758434  397419 api_server.go:88] waiting for apiserver healthz status ...
	I0917 08:41:25.758473  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:25.758517  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:25.791782  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:25.791813  397419 cri.go:89] found id: ""
	I0917 08:41:25.791824  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:25.791876  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.795162  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:25.795222  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:25.827605  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:25.827632  397419 cri.go:89] found id: ""
	I0917 08:41:25.827642  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:25.827695  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.830956  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:25.831016  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:25.864525  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:25.864552  397419 cri.go:89] found id: ""
	I0917 08:41:25.864562  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:25.864628  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.867980  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:25.868042  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:25.901946  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:25.901966  397419 cri.go:89] found id: ""
	I0917 08:41:25.901977  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:25.902026  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.905404  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:25.905458  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:25.938828  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:25.938850  397419 cri.go:89] found id: ""
	I0917 08:41:25.938859  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:25.938905  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.942182  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:25.942243  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:25.975310  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:25.975334  397419 cri.go:89] found id: ""
	I0917 08:41:25.975345  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:25.975405  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:25.978637  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:25.978703  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:26.012169  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:26.012190  397419 cri.go:89] found id: ""
	I0917 08:41:26.012200  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:26.012256  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:26.015540  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:26.015562  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:26.093016  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:26.093054  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:26.136808  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:26.136847  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:26.188782  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:26.188814  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:26.226705  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:26.226736  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:26.259580  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:26.259609  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:26.335847  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:26.335885  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:26.378206  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:26.378237  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:26.404518  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:26.404550  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:26.508227  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:26.508263  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:26.543742  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:26.543777  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:26.600899  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:26.600938  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.138040  397419 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 08:41:29.142631  397419 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 08:41:29.143571  397419 api_server.go:141] control plane version: v1.31.1
	I0917 08:41:29.143606  397419 api_server.go:131] duration metric: took 3.385163598s to wait for apiserver health ...
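The healthz probe above hits the apiserver's /healthz endpoint over HTTPS at 192.168.49.2:8443 and treats an HTTP 200 with body "ok" as healthy. A hand-run equivalent, assuming curl is available on the host; -k skips TLS verification since the apiserver certificate is issued by the cluster's own CA:

	# Manual version of the probe the log performs; expect HTTP 200 and "ok".
	curl -k https://192.168.49.2:8443/healthz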
	I0917 08:41:29.143621  397419 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 08:41:29.143650  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0917 08:41:29.143699  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0917 08:41:29.178086  397419 cri.go:89] found id: "a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:29.178111  397419 cri.go:89] found id: ""
	I0917 08:41:29.178121  397419 logs.go:276] 1 containers: [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d]
	I0917 08:41:29.178180  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.181712  397419 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0917 08:41:29.181779  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0917 08:41:29.215733  397419 cri.go:89] found id: "498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:29.215755  397419 cri.go:89] found id: ""
	I0917 08:41:29.215763  397419 logs.go:276] 1 containers: [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126]
	I0917 08:41:29.215809  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.219058  397419 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0917 08:41:29.219111  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0917 08:41:29.252251  397419 cri.go:89] found id: "5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:29.252272  397419 cri.go:89] found id: ""
	I0917 08:41:29.252279  397419 logs.go:276] 1 containers: [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd]
	I0917 08:41:29.252321  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.255633  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0917 08:41:29.255688  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0917 08:41:29.289333  397419 cri.go:89] found id: "e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:29.289359  397419 cri.go:89] found id: ""
	I0917 08:41:29.289369  397419 logs.go:276] 1 containers: [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141]
	I0917 08:41:29.289423  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.292943  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0917 08:41:29.292996  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0917 08:41:29.326709  397419 cri.go:89] found id: "3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:29.326731  397419 cri.go:89] found id: ""
	I0917 08:41:29.326739  397419 logs.go:276] 1 containers: [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22]
	I0917 08:41:29.326799  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.330170  397419 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0917 08:41:29.330226  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0917 08:41:29.363477  397419 cri.go:89] found id: "3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:29.363501  397419 cri.go:89] found id: ""
	I0917 08:41:29.363511  397419 logs.go:276] 1 containers: [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894]
	I0917 08:41:29.363567  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.366804  397419 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0917 08:41:29.366860  397419 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0917 08:41:29.399852  397419 cri.go:89] found id: "c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.399872  397419 cri.go:89] found id: ""
	I0917 08:41:29.399881  397419 logs.go:276] 1 containers: [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7]
	I0917 08:41:29.399934  397419 ssh_runner.go:195] Run: which crictl
	I0917 08:41:29.403233  397419 logs.go:123] Gathering logs for etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] ...
	I0917 08:41:29.403253  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126"
	I0917 08:41:29.451453  397419 logs.go:123] Gathering logs for kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] ...
	I0917 08:41:29.451484  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141"
	I0917 08:41:29.488951  397419 logs.go:123] Gathering logs for kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] ...
	I0917 08:41:29.488979  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22"
	I0917 08:41:29.523572  397419 logs.go:123] Gathering logs for kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] ...
	I0917 08:41:29.523603  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894"
	I0917 08:41:29.579709  397419 logs.go:123] Gathering logs for CRI-O ...
	I0917 08:41:29.579750  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0917 08:41:29.658415  397419 logs.go:123] Gathering logs for kubelet ...
	I0917 08:41:29.658455  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 08:41:29.735441  397419 logs.go:123] Gathering logs for dmesg ...
	I0917 08:41:29.735481  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 08:41:29.762124  397419 logs.go:123] Gathering logs for describe nodes ...
	I0917 08:41:29.762159  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 08:41:29.856247  397419 logs.go:123] Gathering logs for kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] ...
	I0917 08:41:29.856278  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d"
	I0917 08:41:29.902365  397419 logs.go:123] Gathering logs for coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] ...
	I0917 08:41:29.902398  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd"
	I0917 08:41:29.938050  397419 logs.go:123] Gathering logs for kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] ...
	I0917 08:41:29.938081  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7"
	I0917 08:41:29.973223  397419 logs.go:123] Gathering logs for container status ...
	I0917 08:41:29.973251  397419 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 08:41:32.526366  397419 system_pods.go:59] 19 kube-system pods found
	I0917 08:41:32.526399  397419 system_pods.go:61] "coredns-7c65d6cfc9-7lhft" [d955ab8f-33f3-4177-a7cf-29b7b9cc1102] Running
	I0917 08:41:32.526405  397419 system_pods.go:61] "csi-hostpath-attacher-0" [74cbb098-f189-44df-a4b9-3d4644fad690] Running
	I0917 08:41:32.526409  397419 system_pods.go:61] "csi-hostpath-resizer-0" [2d53c081-d93a-46a4-8b7b-29e15b9b485e] Running
	I0917 08:41:32.526413  397419 system_pods.go:61] "csi-hostpathplugin-lknd7" [3267ecfa-6ae5-4291-9944-574c0476e9ec] Running
	I0917 08:41:32.526416  397419 system_pods.go:61] "etcd-addons-093168" [a017480c-3ca0-477f-801b-630887a3efdd] Running
	I0917 08:41:32.526420  397419 system_pods.go:61] "kindnet-nvhtv" [2a27ef1d-01b4-4db6-9b83-51a2b2889bc2] Running
	I0917 08:41:32.526422  397419 system_pods.go:61] "kube-apiserver-addons-093168" [1b03826d-3f50-4a0c-a2ad-f8d354f0935a] Running
	I0917 08:41:32.526425  397419 system_pods.go:61] "kube-controller-manager-addons-093168" [2da0a6e2-49be-44c3-a463-463a9865310f] Running
	I0917 08:41:32.526428  397419 system_pods.go:61] "kube-ingress-dns-minikube" [236b5470-912c-4665-ae2a-0aeda61e0892] Running
	I0917 08:41:32.526432  397419 system_pods.go:61] "kube-proxy-t77c5" [76518769-e724-461e-8134-d120144d60a8] Running
	I0917 08:41:32.526436  397419 system_pods.go:61] "kube-scheduler-addons-093168" [8dbe178e-95a4-491e-a059-423f6b78f417] Running
	I0917 08:41:32.526441  397419 system_pods.go:61] "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
	I0917 08:41:32.526445  397419 system_pods.go:61] "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
	I0917 08:41:32.526450  397419 system_pods.go:61] "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
	I0917 08:41:32.526455  397419 system_pods.go:61] "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
	I0917 08:41:32.526461  397419 system_pods.go:61] "snapshot-controller-56fcc65765-md5h6" [ff141ee6-2569-49b0-8b1a-83d9a1a05178] Running
	I0917 08:41:32.526470  397419 system_pods.go:61] "snapshot-controller-56fcc65765-xdr22" [69737144-ad79-4db9-ae9c-e5575f580f48] Running
	I0917 08:41:32.526475  397419 system_pods.go:61] "storage-provisioner" [e20caa93-3db5-4d96-b8a8-7665d4f5437d] Running
	I0917 08:41:32.526483  397419 system_pods.go:61] "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
	I0917 08:41:32.526493  397419 system_pods.go:74] duration metric: took 3.382863956s to wait for pod list to return data ...
	I0917 08:41:32.526503  397419 default_sa.go:34] waiting for default service account to be created ...
	I0917 08:41:32.529073  397419 default_sa.go:45] found service account: "default"
	I0917 08:41:32.529100  397419 default_sa.go:55] duration metric: took 2.584342ms for default service account to be created ...
	I0917 08:41:32.529110  397419 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 08:41:32.539148  397419 system_pods.go:86] 19 kube-system pods found
	I0917 08:41:32.539179  397419 system_pods.go:89] "coredns-7c65d6cfc9-7lhft" [d955ab8f-33f3-4177-a7cf-29b7b9cc1102] Running
	I0917 08:41:32.539185  397419 system_pods.go:89] "csi-hostpath-attacher-0" [74cbb098-f189-44df-a4b9-3d4644fad690] Running
	I0917 08:41:32.539189  397419 system_pods.go:89] "csi-hostpath-resizer-0" [2d53c081-d93a-46a4-8b7b-29e15b9b485e] Running
	I0917 08:41:32.539193  397419 system_pods.go:89] "csi-hostpathplugin-lknd7" [3267ecfa-6ae5-4291-9944-574c0476e9ec] Running
	I0917 08:41:32.539196  397419 system_pods.go:89] "etcd-addons-093168" [a017480c-3ca0-477f-801b-630887a3efdd] Running
	I0917 08:41:32.539200  397419 system_pods.go:89] "kindnet-nvhtv" [2a27ef1d-01b4-4db6-9b83-51a2b2889bc2] Running
	I0917 08:41:32.539203  397419 system_pods.go:89] "kube-apiserver-addons-093168" [1b03826d-3f50-4a0c-a2ad-f8d354f0935a] Running
	I0917 08:41:32.539207  397419 system_pods.go:89] "kube-controller-manager-addons-093168" [2da0a6e2-49be-44c3-a463-463a9865310f] Running
	I0917 08:41:32.539210  397419 system_pods.go:89] "kube-ingress-dns-minikube" [236b5470-912c-4665-ae2a-0aeda61e0892] Running
	I0917 08:41:32.539213  397419 system_pods.go:89] "kube-proxy-t77c5" [76518769-e724-461e-8134-d120144d60a8] Running
	I0917 08:41:32.539216  397419 system_pods.go:89] "kube-scheduler-addons-093168" [8dbe178e-95a4-491e-a059-423f6b78f417] Running
	I0917 08:41:32.539220  397419 system_pods.go:89] "metrics-server-84c5f94fbc-bmr95" [48e9bb6a-e161-4bfe-a8e4-14f5b970e50c] Running
	I0917 08:41:32.539223  397419 system_pods.go:89] "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
	I0917 08:41:32.539227  397419 system_pods.go:89] "registry-66c9cd494c-8h9wm" [efc2db30-2af8-4cf7-a316-5dac4df4a136] Running
	I0917 08:41:32.539230  397419 system_pods.go:89] "registry-proxy-9plz8" [8bc41646-54c5-4d13-8d5f-bebcdc6f15ce] Running
	I0917 08:41:32.539235  397419 system_pods.go:89] "snapshot-controller-56fcc65765-md5h6" [ff141ee6-2569-49b0-8b1a-83d9a1a05178] Running
	I0917 08:41:32.539242  397419 system_pods.go:89] "snapshot-controller-56fcc65765-xdr22" [69737144-ad79-4db9-ae9c-e5575f580f48] Running
	I0917 08:41:32.539245  397419 system_pods.go:89] "storage-provisioner" [e20caa93-3db5-4d96-b8a8-7665d4f5437d] Running
	I0917 08:41:32.539248  397419 system_pods.go:89] "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
	I0917 08:41:32.539255  397419 system_pods.go:126] duration metric: took 10.139894ms to wait for k8s-apps to be running ...
	I0917 08:41:32.539265  397419 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 08:41:32.539310  397419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 08:41:32.550663  397419 system_svc.go:56] duration metric: took 11.387952ms WaitForService to wait for kubelet
	I0917 08:41:32.550703  397419 kubeadm.go:582] duration metric: took 2m35.445654974s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 08:41:32.550732  397419 node_conditions.go:102] verifying NodePressure condition ...
	I0917 08:41:32.553809  397419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 08:41:32.553834  397419 node_conditions.go:123] node cpu capacity is 8
	I0917 08:41:32.553851  397419 node_conditions.go:105] duration metric: took 3.112867ms to run NodePressure ...
	I0917 08:41:32.553869  397419 start.go:241] waiting for startup goroutines ...
	I0917 08:41:32.553875  397419 start.go:246] waiting for cluster config update ...
	I0917 08:41:32.553893  397419 start.go:255] writing updated cluster config ...
	I0917 08:41:32.554149  397419 ssh_runner.go:195] Run: rm -f paused
	I0917 08:41:32.604339  397419 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 08:41:32.606540  397419 out.go:177] * Done! kubectl is now configured to use "addons-093168" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 08:55:52 addons-093168 crio[1031]: time="2024-09-17 08:55:52.173859525Z" level=info msg="Removing pod sandbox: 7951cf53f3ce5ae586d65f70b8664029da0c87e1bc35d6a8ac55eaf306a99602" id=7a8d4b23-6c5f-4c6a-a8b5-79e55db5694a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 17 08:55:52 addons-093168 crio[1031]: time="2024-09-17 08:55:52.181471123Z" level=info msg="Removed pod sandbox: 7951cf53f3ce5ae586d65f70b8664029da0c87e1bc35d6a8ac55eaf306a99602" id=7a8d4b23-6c5f-4c6a-a8b5-79e55db5694a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 17 08:55:52 addons-093168 crio[1031]: time="2024-09-17 08:55:52.936158478Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cac1184a-85fa-445b-ac96-373743af1b2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:52 addons-093168 crio[1031]: time="2024-09-17 08:55:52.936231527Z" level=info msg="Checking image status: busybox:stable" id=36c90e83-6e09-41e3-9605-39712c6715d8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:52 addons-093168 crio[1031]: time="2024-09-17 08:55:52.936369667Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cac1184a-85fa-445b-ac96-373743af1b2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:52 addons-093168 crio[1031]: time="2024-09-17 08:55:52.936371592Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Sep 17 08:55:52 addons-093168 crio[1031]: time="2024-09-17 08:55:52.936474817Z" level=info msg="Image busybox:stable not found" id=36c90e83-6e09-41e3-9605-39712c6715d8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:56 addons-093168 crio[1031]: time="2024-09-17 08:55:56.938625856Z" level=info msg="Checking image status: docker.io/nginx:latest" id=30b1ac5f-fbb6-4b3b-a06a-069a3901bc15 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:55:56 addons-093168 crio[1031]: time="2024-09-17 08:55:56.938919660Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3 docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e],Size_:191853369,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=30b1ac5f-fbb6-4b3b-a06a-069a3901bc15 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:04 addons-093168 crio[1031]: time="2024-09-17 08:56:04.936317806Z" level=info msg="Checking image status: busybox:stable" id=deefe658-0535-411d-8629-0b2850f64611 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:04 addons-093168 crio[1031]: time="2024-09-17 08:56:04.936479608Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Sep 17 08:56:04 addons-093168 crio[1031]: time="2024-09-17 08:56:04.936577382Z" level=info msg="Image busybox:stable not found" id=deefe658-0535-411d-8629-0b2850f64611 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:07 addons-093168 crio[1031]: time="2024-09-17 08:56:07.935755155Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3c9af2de-1e63-4938-8e5d-2b7db45feb56 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:07 addons-093168 crio[1031]: time="2024-09-17 08:56:07.935809439Z" level=info msg="Checking image status: docker.io/nginx:latest" id=6e786147-d2fd-4b93-9778-660f927e1657 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:07 addons-093168 crio[1031]: time="2024-09-17 08:56:07.936033269Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3 docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e],Size_:191853369,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6e786147-d2fd-4b93-9778-660f927e1657 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:07 addons-093168 crio[1031]: time="2024-09-17 08:56:07.936046754Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3c9af2de-1e63-4938-8e5d-2b7db45feb56 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:15 addons-093168 crio[1031]: time="2024-09-17 08:56:15.518837913Z" level=info msg="Pulling image: busybox:stable" id=6781619e-2e98-46dc-850d-95bd3d2e881f name=/runtime.v1.ImageService/PullImage
	Sep 17 08:56:15 addons-093168 crio[1031]: time="2024-09-17 08:56:15.519032390Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Sep 17 08:56:15 addons-093168 crio[1031]: time="2024-09-17 08:56:15.556723096Z" level=info msg="Trying to access \"docker.io/library/busybox:stable\""
	Sep 17 08:56:19 addons-093168 crio[1031]: time="2024-09-17 08:56:19.935644545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e09f767-6ab1-40af-91c3-3783c178934b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:19 addons-093168 crio[1031]: time="2024-09-17 08:56:19.935880329Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8e09f767-6ab1-40af-91c3-3783c178934b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:21 addons-093168 crio[1031]: time="2024-09-17 08:56:21.936587441Z" level=info msg="Checking image status: docker.io/nginx:latest" id=ed23d46d-5802-4a95-83f0-b6ea6631f71d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:21 addons-093168 crio[1031]: time="2024-09-17 08:56:21.936843050Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3 docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e],Size_:191853369,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ed23d46d-5802-4a95-83f0-b6ea6631f71d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:26 addons-093168 crio[1031]: time="2024-09-17 08:56:26.936133701Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=4a1ac217-679d-40ea-8d4b-977c5862afa1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 08:56:26 addons-093168 crio[1031]: time="2024-09-17 08:56:26.936396749Z" level=info msg="Image docker.io/nginx:alpine not found" id=4a1ac217-679d-40ea-8d4b-977c5862afa1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0906bd347c6d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          15 minutes ago      Running             csi-snapshotter                          0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	f64b5aebbe7dd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 minutes ago      Running             csi-provisioner                          0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	eba5434cab6ab       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 minutes ago      Running             liveness-probe                           0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	057ac2c02266d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           15 minutes ago      Running             hostpath                                 0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	a0cca87be1a6f       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             15 minutes ago      Running             controller                               0                   2ba51e0898663       ingress-nginx-controller-bc57996ff-vgw4z
	db9ecacd5aed6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                16 minutes ago      Running             node-driver-registrar                    0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	843e30f0a0cf8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 16 minutes ago      Running             gcp-auth                                 0                   2e75c3dc5c24b       gcp-auth-89d5ffd79-xhlm6
	a53dfdb3b91a2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              16 minutes ago      Running             csi-resizer                              0                   e4b2df5e4c60c       csi-hostpath-resizer-0
	a31591d3a75de       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             16 minutes ago      Running             local-path-provisioner                   0                   655c3c112fdda       local-path-provisioner-86d989889c-qkqjp
	221d8f80ce839       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             16 minutes ago      Running             csi-attacher                             0                   47552b94b1444       csi-hostpath-attacher-0
	12e5d8714fa59       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   16 minutes ago      Exited              patch                                    0                   4d5a9d109a211       ingress-nginx-admission-patch-pzmkp
	f921ee5175ec0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   16 minutes ago      Running             csi-external-health-monitor-controller   0                   2544e4c6b1b55       csi-hostpathplugin-lknd7
	a54dcb4e0840a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   16 minutes ago      Exited              create                                   0                   fc238c2462bf5       ingress-nginx-admission-create-4qdns
	b1aa0b4e6a00c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      16 minutes ago      Running             volume-snapshot-controller               0                   47f5d8b226a2a       snapshot-controller-56fcc65765-xdr22
	85332a0e5866e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      16 minutes ago      Running             volume-snapshot-controller               0                   f61171da5bfb1       snapshot-controller-56fcc65765-md5h6
	3300f395d8567       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             16 minutes ago      Running             minikube-ingress-dns                     0                   f7a1428432f34       kube-ingress-dns-minikube
	5eddba40afd11       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             16 minutes ago      Running             coredns                                  0                   ebe1938207849       coredns-7c65d6cfc9-7lhft
	6d7dbaef7a5cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             16 minutes ago      Running             storage-provisioner                      0                   c9466fe8d518b       storage-provisioner
	3a8b894037793       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             17 minutes ago      Running             kube-proxy                               0                   eb334b9a5799a       kube-proxy-t77c5
	c9fa6b2ef5f0b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             17 minutes ago      Running             kindnet-cni                              0                   2e76c07fa96a5       kindnet-nvhtv
	e817293c644c7       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             17 minutes ago      Running             kube-scheduler                           0                   a4765fe76b73a       kube-scheduler-addons-093168
	3521aa957963e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             17 minutes ago      Running             kube-controller-manager                  0                   2608552715e00       kube-controller-manager-addons-093168
	498509ee96967       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             17 minutes ago      Running             etcd                                     0                   62ce9ab109c53       etcd-addons-093168
	a2e61e738c0da       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             17 minutes ago      Running             kube-apiserver                           0                   bceb5d8367d07       kube-apiserver-addons-093168
	
	
	==> coredns [5eddba40afd11915d95eb332fe89f8cb94d9dce20f3d8a6ac384f17db4fa96bd] <==
	[INFO] 10.244.0.11:33082 - 25853 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001192s
	[INFO] 10.244.0.11:37329 - 15527 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075609s
	[INFO] 10.244.0.11:37329 - 17316 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121561s
	[INFO] 10.244.0.11:60250 - 35649 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005099659s
	[INFO] 10.244.0.11:60250 - 60739 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.006207516s
	[INFO] 10.244.0.11:37419 - 41998 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006428119s
	[INFO] 10.244.0.11:37419 - 39435 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006489964s
	[INFO] 10.244.0.11:56965 - 22146 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005110836s
	[INFO] 10.244.0.11:56965 - 41870 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005774722s
	[INFO] 10.244.0.11:40932 - 6018 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055144s
	[INFO] 10.244.0.11:40932 - 2693 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093554s
	[INFO] 10.244.0.20:60603 - 21372 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000239521s
	[INFO] 10.244.0.20:56296 - 33744 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000369472s
	[INFO] 10.244.0.20:40076 - 30284 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123756s
	[INFO] 10.244.0.20:49639 - 52270 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158323s
	[INFO] 10.244.0.20:40994 - 1923 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099192s
	[INFO] 10.244.0.20:37435 - 32231 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168193s
	[INFO] 10.244.0.20:36201 - 45290 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008885924s
	[INFO] 10.244.0.20:59898 - 55008 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008870022s
	[INFO] 10.244.0.20:43991 - 39302 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007846244s
	[INFO] 10.244.0.20:58304 - 34077 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008338334s
	[INFO] 10.244.0.20:34428 - 29339 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006763856s
	[INFO] 10.244.0.20:47732 - 9825 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007153268s
	[INFO] 10.244.0.20:52184 - 47443 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000802704s
	[INFO] 10.244.0.20:41521 - 18294 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000879797s
	
	
	==> describe nodes <==
	Name:               addons-093168
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-093168
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=addons-093168
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T08_38_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-093168
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-093168"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:38:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-093168
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 08:56:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:38:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 08:55:32 +0000   Tue, 17 Sep 2024 08:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-093168
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 16fdb73868874fa2aa4322a27fc496be
	  System UUID:                7036efa9-bcf4-469e-8312-994f69eacc62
	  Boot ID:                    8c59a26b-5d0c-4753-9e88-ef03399e569b
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     test-local-path                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  gcp-auth                    gcp-auth-89d5ffd79-xhlm6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vgw4z    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-7lhft                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-lknd7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-addons-093168                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-nvhtv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-093168                250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-093168       200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-t77c5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-093168                100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-56fcc65765-md5h6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-56fcc65765-xdr22        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          local-path-provisioner-86d989889c-qkqjp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17m   kube-proxy       
	  Normal   Starting                 17m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m   kubelet          Node addons-093168 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m   kubelet          Node addons-093168 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m   kubelet          Node addons-093168 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m   node-controller  Node addons-093168 event: Registered Node addons-093168 in Controller
	  Normal   NodeReady                16m   kubelet          Node addons-093168 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba ff 74 a1 5e 3b 08 06
	[ +13.302976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e 08 54 46 b8 ba 08 06
	[  +0.000352] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ba ff 74 a1 5e 3b 08 06
	[Sep17 08:24] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 24 b9 ac 9a ab 08 06
	[  +0.000405] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a b6 29 69 41 ca 08 06
	[ +18.455196] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 92 00 b0 ac cb 10 08 06
	[  +0.102770] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	[ +10.887970] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cni0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[  +0.094820] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[Sep17 08:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 14 a2 f8 f7 06 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[ +21.407596] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 7a 9f 11 c8 01 08 06
	[  +0.000366] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	
	
	==> etcd [498509ee9696754dc0cf3ded43f8b69e309646ab8889fe9d00bbd212c8ce0126] <==
	{"level":"info","ts":"2024-09-17T08:39:00.944484Z","caller":"traceutil/trace.go:171","msg":"trace[1814892135] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:389; }","duration":"195.704417ms","start":"2024-09-17T08:39:00.748773Z","end":"2024-09-17T08:39:00.944477Z","steps":["trace[1814892135] 'agreement among raft nodes before linearized reading'  (duration: 193.700916ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:00.942519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.799596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-17T08:39:00.944656Z","caller":"traceutil/trace.go:171","msg":"trace[1494037761] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:389; }","duration":"195.932813ms","start":"2024-09-17T08:39:00.748716Z","end":"2024-09-17T08:39:00.944649Z","steps":["trace[1494037761] 'agreement among raft nodes before linearized reading'  (duration: 193.78917ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.236883Z","caller":"traceutil/trace.go:171","msg":"trace[1393862041] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"189.03103ms","start":"2024-09-17T08:39:01.047836Z","end":"2024-09-17T08:39:01.236868Z","steps":["trace[1393862041] 'process raft request'  (duration: 84.371141ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246393Z","caller":"traceutil/trace.go:171","msg":"trace[350871136] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"192.090658ms","start":"2024-09-17T08:39:01.054286Z","end":"2024-09-17T08:39:01.246377Z","steps":["trace[350871136] 'process raft request'  (duration: 192.056665ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246556Z","caller":"traceutil/trace.go:171","msg":"trace[288716589] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"192.561769ms","start":"2024-09-17T08:39:01.053978Z","end":"2024-09-17T08:39:01.246540Z","steps":["trace[288716589] 'process raft request'  (duration: 192.289701ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246589Z","caller":"traceutil/trace.go:171","msg":"trace[842047613] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"194.309372ms","start":"2024-09-17T08:39:01.052273Z","end":"2024-09-17T08:39:01.246583Z","steps":["trace[842047613] 'process raft request'  (duration: 193.860025ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246756Z","caller":"traceutil/trace.go:171","msg":"trace[874038599] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"192.611349ms","start":"2024-09-17T08:39:01.054136Z","end":"2024-09-17T08:39:01.246747Z","steps":["trace[874038599] 'process raft request'  (duration: 192.166716ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.246789Z","caller":"traceutil/trace.go:171","msg":"trace[832402900] linearizableReadLoop","detail":"{readStateIndex:412; appliedIndex:412; }","duration":"107.196849ms","start":"2024-09-17T08:39:01.139584Z","end":"2024-09-17T08:39:01.246781Z","steps":["trace[832402900] 'read index received'  (duration: 107.193495ms)","trace[832402900] 'applied index is now lower than readState.Index'  (duration: 2.936µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T08:39:01.246842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.242882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.247903Z","caller":"traceutil/trace.go:171","msg":"trace[1595279853] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:401; }","duration":"108.044342ms","start":"2024-09-17T08:39:01.139580Z","end":"2024-09-17T08:39:01.247624Z","steps":["trace[1595279853] 'agreement among raft nodes before linearized reading'  (duration: 107.221566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.249317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.530022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.250846Z","caller":"traceutil/trace.go:171","msg":"trace[1335273238] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:407; }","duration":"111.069492ms","start":"2024-09-17T08:39:01.139765Z","end":"2024-09-17T08:39:01.250834Z","steps":["trace[1335273238] 'agreement among raft nodes before linearized reading'  (duration: 109.456626ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:39:01.249635Z","caller":"traceutil/trace.go:171","msg":"trace[134367931] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"109.798885ms","start":"2024-09-17T08:39:01.139825Z","end":"2024-09-17T08:39:01.249624Z","steps":["trace[134367931] 'process raft request'  (duration: 109.176303ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:01.250797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.892038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:01.251932Z","caller":"traceutil/trace.go:171","msg":"trace[1048075780] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:407; }","duration":"112.027319ms","start":"2024-09-17T08:39:01.139891Z","end":"2024-09-17T08:39:01.251919Z","steps":["trace[1048075780] 'agreement among raft nodes before linearized reading'  (duration: 110.877975ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:40:35.768719Z","caller":"traceutil/trace.go:171","msg":"trace[33144781] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"100.543757ms","start":"2024-09-17T08:40:35.668147Z","end":"2024-09-17T08:40:35.768691Z","steps":["trace[33144781] 'process raft request'  (duration: 100.303667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:40:35.958931Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.840736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-bmr95\" ","response":"range_response_count:1 size:4865"}
	{"level":"info","ts":"2024-09-17T08:40:35.958981Z","caller":"traceutil/trace.go:171","msg":"trace[13582332] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-bmr95; range_end:; response_count:1; response_revision:1201; }","duration":"105.907905ms","start":"2024-09-17T08:40:35.853062Z","end":"2024-09-17T08:40:35.958970Z","steps":["trace[13582332] 'range keys from in-memory index tree'  (duration: 105.71294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:48:48.277449Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1537}
	{"level":"info","ts":"2024-09-17T08:48:48.301907Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1537,"took":"23.976999ms","hash":2118524458,"current-db-size-bytes":6434816,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3305472,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-17T08:48:48.301956Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2118524458,"revision":1537,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T08:53:48.282008Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1957}
	{"level":"info","ts":"2024-09-17T08:53:48.297895Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1957,"took":"15.390014ms","hash":1935728090,"current-db-size-bytes":6434816,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3989504,"current-db-size-in-use":"4.0 MB"}
	{"level":"info","ts":"2024-09-17T08:53:48.297952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1935728090,"revision":1957,"compact-revision":1537}
	
	
	==> gcp-auth [843e30f0a0cf860efc230a2a87deca3cc75d4f6408e31a84a0dd5b01df4dc08d] <==
	2024/09/17 08:41:32 Ready to write response ...
	2024/09/17 08:41:32 Ready to marshal response ...
	2024/09/17 08:41:32 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:36 Ready to marshal response ...
	2024/09/17 08:49:36 Ready to write response ...
	2024/09/17 08:49:45 Ready to marshal response ...
	2024/09/17 08:49:45 Ready to write response ...
	2024/09/17 08:49:46 Ready to marshal response ...
	2024/09/17 08:49:46 Ready to write response ...
	2024/09/17 08:49:51 Ready to marshal response ...
	2024/09/17 08:49:51 Ready to write response ...
	2024/09/17 08:49:52 Ready to marshal response ...
	2024/09/17 08:49:52 Ready to write response ...
	2024/09/17 08:49:52 Ready to marshal response ...
	2024/09/17 08:49:52 Ready to write response ...
	2024/09/17 08:49:53 Ready to marshal response ...
	2024/09/17 08:49:53 Ready to write response ...
	2024/09/17 08:49:54 Ready to marshal response ...
	2024/09/17 08:49:54 Ready to write response ...
	2024/09/17 08:50:25 Ready to marshal response ...
	2024/09/17 08:50:25 Ready to write response ...
	
	
	==> kernel <==
	 08:56:27 up  2:38,  0 users,  load average: 0.01, 0.14, 0.50
	Linux addons-093168 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [c9fa6b2ef5f0bc8fa109e1c2c6daecd3d578a35690aeacf3d0d366b95c6135e7] <==
	I0917 08:54:21.148963       1 main.go:299] handling current node
	I0917 08:54:31.149251       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:31.149283       1 main.go:299] handling current node
	I0917 08:54:41.155277       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:41.155313       1 main.go:299] handling current node
	I0917 08:54:51.148947       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:54:51.148992       1 main.go:299] handling current node
	I0917 08:55:01.149259       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:01.149298       1 main.go:299] handling current node
	I0917 08:55:11.149704       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:11.149737       1 main.go:299] handling current node
	I0917 08:55:21.152017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:21.152059       1 main.go:299] handling current node
	I0917 08:55:31.150239       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:31.150273       1 main.go:299] handling current node
	I0917 08:55:41.149094       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:41.149125       1 main.go:299] handling current node
	I0917 08:55:51.153849       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:55:51.153887       1 main.go:299] handling current node
	I0917 08:56:01.148935       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:01.148985       1 main.go:299] handling current node
	I0917 08:56:11.149025       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:11.149058       1 main.go:299] handling current node
	I0917 08:56:21.153988       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 08:56:21.154028       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a2e61e738c0da0f2a24020d6e0be37c9c714a07c86911a4809b0791fee42f97d] <==
	W0917 08:41:23.031581       1 handler_proxy.go:99] no RequestInfo found in the context
	W0917 08:41:23.031606       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 08:41:23.031645       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0917 08:41:23.031691       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 08:41:23.032764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0917 08:41:23.032787       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0917 08:41:27.038506       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.221.184:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W0917 08:41:27.038723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0917 08:41:27.039088       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0917 08:41:27.049456       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0917 08:49:36.125694       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.199.141"}
	E0917 08:49:48.897202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40012: use of closed network connection
	E0917 08:49:48.922992       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:41648: read: connection reset by peer
	E0917 08:49:53.964352       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0917 08:49:54.758375       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 08:49:54.934461       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.248.144"}
	I0917 08:50:05.538716       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 08:53:08.601116       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 08:53:09.617791       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [3521aa957963e31e1c7db8feb7538578803ed46869f86ab8240988f001f8b894] <==
	I0917 08:53:03.110800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="8.873µs"
	E0917 08:53:09.619260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:53:11.124524       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:11.124582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:53:13.274186       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:13.274230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:53:18.716391       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0917 08:53:19.260347       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:19.260394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:53:26.564141       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 08:53:26.564179       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 08:53:26.970602       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 08:53:26.970647       1 shared_informer.go:320] Caches are synced for garbage collector
	W0917 08:53:31.159305       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:31.159360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:53:52.995298       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:53:52.995348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:54:39.470027       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:54:39.470076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:55:18.328582       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:55:18.328633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:55:32.767462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-093168"
	I0917 08:55:35.162554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="7.818µs"
	W0917 08:55:52.138508       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:55:52.138568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [3a8b89403779369b6c149b1229a8d3591bd05a7e4727228239eaa4cf14ad1c22] <==
	I0917 08:39:00.642627       1 server_linux.go:66] "Using iptables proxy"
	I0917 08:39:01.648049       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 08:39:01.648220       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:39:02.034353       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 08:39:02.034507       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:39:02.043649       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:39:02.044366       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:39:02.044467       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:39:02.047306       1 config.go:199] "Starting service config controller"
	I0917 08:39:02.047353       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:39:02.047414       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:39:02.047425       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:39:02.048125       1 config.go:328] "Starting node config controller"
	I0917 08:39:02.048199       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:39:02.148044       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 08:39:02.148173       1 shared_informer.go:320] Caches are synced for service config
	I0917 08:39:02.150486       1 shared_informer.go:320] Caches are synced for node config
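
The two warnings at kube-proxy startup are configuration hints rather than failures: NodePort traffic is accepted on every local IP, and route_localnet is enabled for localhost NodePorts. In a kubeadm-provisioned cluster like this one, both knobs live in the KubeProxyConfiguration stored under the config.conf key of the kube-proxy ConfigMap; a sketch of where to look (the CIDR below is a hypothetical value matching the node subnet):

	kubectl --context addons-093168 -n kube-system get configmap kube-proxy -o yaml
	# relevant fields inside the embedded KubeProxyConfiguration:
	#   nodePortAddresses: ["192.168.49.0/24"]   # restrict NodePorts to this subnet (example value)
	#   iptables:
	#     localhostNodePorts: false              # drop the route_localnet requirement
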
	
	
	==> kube-scheduler [e817293c644c7b70a5555957d018f075a9268888e92ab5b5942d0cff022ef141] <==
	W0917 08:38:49.536513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0917 08:38:49.536752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0917 08:38:49.536844       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 08:38:49.536913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 08:38:49.537008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:49.536852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.537056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.536771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:49.537088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 08:38:49.537126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.536628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 08:38:49.537153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:49.537194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0917 08:38:49.537194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 08:38:49.537213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0917 08:38:49.537222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:50.443859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 08:38:50.443910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:50.468561       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:50.468614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0917 08:38:50.759161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
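
The burst of "forbidden" list/watch warnings is the usual bootstrap race: the scheduler's informers start before the default RBAC bindings have propagated, and the final line shows its caches syncing once they do. Were such errors to persist, one way to probe the scheduler's effective permissions would be impersonation:

	kubectl --context addons-093168 auth can-i list pods --as=system:kube-scheduler
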
	
	
	==> kubelet <==
	Sep 17 08:55:42 addons-093168 kubelet[1648]: E0917 08:55:42.267518    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563342267192388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:44 addons-093168 kubelet[1648]: E0917 08:55:44.748154    1648 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 08:55:44 addons-093168 kubelet[1648]: E0917 08:55:44.748229    1648 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 08:55:44 addons-093168 kubelet[1648]: E0917 08:55:44.748468    1648 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzwmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod-restore_default(973c077a-45c1-4c85-bd62-419d8901a499): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 08:55:44 addons-093168 kubelet[1648]: E0917 08:55:44.750121    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="973c077a-45c1-4c85-bd62-419d8901a499"
	Sep 17 08:55:52 addons-093168 kubelet[1648]: E0917 08:55:52.270341    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563352270097841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:52 addons-093168 kubelet[1648]: E0917 08:55:52.270383    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563352270097841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:55:52 addons-093168 kubelet[1648]: E0917 08:55:52.936608    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:55:52 addons-093168 kubelet[1648]: E0917 08:55:52.936655    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="e7497496-c2fe-46d3-98d2-378a076580ac"
	Sep 17 08:55:56 addons-093168 kubelet[1648]: E0917 08:55:56.939236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="973c077a-45c1-4c85-bd62-419d8901a499"
	Sep 17 08:56:02 addons-093168 kubelet[1648]: E0917 08:56:02.272219    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563362271963200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:56:02 addons-093168 kubelet[1648]: E0917 08:56:02.272262    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563362271963200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:56:07 addons-093168 kubelet[1648]: E0917 08:56:07.936271    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:56:07 addons-093168 kubelet[1648]: E0917 08:56:07.936272    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="973c077a-45c1-4c85-bd62-419d8901a499"
	Sep 17 08:56:12 addons-093168 kubelet[1648]: E0917 08:56:12.275047    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563372274772458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:56:12 addons-093168 kubelet[1648]: E0917 08:56:12.275095    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563372274772458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:56:15 addons-093168 kubelet[1648]: E0917 08:56:15.517871    1648 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 08:56:15 addons-093168 kubelet[1648]: E0917 08:56:15.517948    1648 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 08:56:15 addons-093168 kubelet[1648]: E0917 08:56:15.518206    1648 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dd297,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(310f797d-f8e1-4d73-abe1-05f4dc832ecc): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 08:56:15 addons-093168 kubelet[1648]: E0917 08:56:15.519613    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="310f797d-f8e1-4d73-abe1-05f4dc832ecc"
	Sep 17 08:56:19 addons-093168 kubelet[1648]: E0917 08:56:19.936118    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0b6005bc-d2b8-4f48-bcf7-9878b2bf05d1"
	Sep 17 08:56:21 addons-093168 kubelet[1648]: E0917 08:56:21.937105    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="973c077a-45c1-4c85-bd62-419d8901a499"
	Sep 17 08:56:22 addons-093168 kubelet[1648]: E0917 08:56:22.277632    1648 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563382277352398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:56:22 addons-093168 kubelet[1648]: E0917 08:56:22.277664    1648 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563382277352398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:533532,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 08:56:26 addons-093168 kubelet[1648]: E0917 08:56:26.936676    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="310f797d-f8e1-4d73-abe1-05f4dc832ecc"
	
	
	==> storage-provisioner [6d7dbaef7a5cdfbfc36d8383927eea1f42c07e4bc01e6aa61dd711665433a6d2] <==
	I0917 08:39:42.145412       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:39:42.155383       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:39:42.155443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 08:39:42.163576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 08:39:42.163731       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e63dab40-9e98-4f4f-adef-1b218f507e90", APIVersion:"v1", ResourceVersion:"911", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b became leader
	I0917 08:39:42.163849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b!
	I0917 08:39:42.264554       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-093168_95b1dd30-5446-4b97-a4d9-95691f11eb5b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093168 -n addons-093168
helpers_test.go:261: (dbg) Run:  kubectl --context addons-093168 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp: exit status 1 (90.434488ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:41:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gdp6f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gdp6f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-093168
	  Normal   Pulling    13m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m51s (x44 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:49:54 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dd297 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dd297:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m34s                default-scheduler  Successfully assigned default/nginx to addons-093168
	  Warning  Failed     5m58s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    68s (x4 over 6m33s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     13s (x4 over 5m58s)  kubelet            Error: ErrImagePull
	  Warning  Failed     13s (x3 over 3m55s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x5 over 5m57s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2s (x5 over 5m57s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:50:25 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gzwmm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-gzwmm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-093168
	  Normal   Pulling    117s (x3 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     44s (x3 over 4m25s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     44s (x3 over 4m25s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x5 over 4m25s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     7s (x5 over 4m25s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-093168/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 08:49:57 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9njfw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-9njfw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m31s                default-scheduler  Successfully assigned default/test-local-path to addons-093168
	  Warning  Failed     75s (x3 over 4m57s)  kubelet            Failed to pull image "busybox:stable": loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     75s (x3 over 4m57s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    36s (x5 over 4m56s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     36s (x5 over 4m56s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    24s (x4 over 6m29s)  kubelet            Pulling image "busybox:stable"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4qdns" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pzmkp" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-093168 describe pod busybox nginx task-pv-pod-restore test-local-path ingress-nginx-admission-create-4qdns ingress-nginx-admission-patch-pzmkp: exit status 1
--- FAIL: TestAddons/parallel/CSI (402.10s)
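
Note that every image-pull failure in this test is Docker Hub's anonymous-pull rate limit (toomanyrequests). One possible mitigation, assuming a Docker Hub account and a hypothetical secret named regcred, is to attach authenticated pull credentials to the default service account so test pods stop pulling anonymously:

	kubectl --context addons-093168 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKERHUB_USER" --docker-password="$DOCKERHUB_TOKEN"
	kubectl --context addons-093168 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'
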

TestAddons/parallel/LocalPath (185.82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-093168 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-093168 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093168 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e7497496-c2fe-46d3-98d2-378a076580ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:995: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:995: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093168 -n addons-093168
addons_test.go:995: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2024-09-17 08:52:57.529904773 +0000 UTC m=+894.812050653
addons_test.go:995: (dbg) Run:  kubectl --context addons-093168 describe po test-local-path -n default
addons_test.go:995: (dbg) kubectl --context addons-093168 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-093168/192.168.49.2
Start Time:       Tue, 17 Sep 2024 08:49:57 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
  IP:  10.244.0.30
Containers:
  busybox:
    Container ID:  
    Image:         busybox:stable
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo 'local-path-provisioner' > /test/file1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /test from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9njfw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-pvc
    ReadOnly:   false
  kube-api-access-9njfw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/test-local-path to addons-093168
  Warning  Failed     86s                  kubelet            Failed to pull image "busybox:stable": loading manifest for target platform: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     86s                  kubelet            Error: ErrImagePull
  Normal   BackOff    85s                  kubelet            Back-off pulling image "busybox:stable"
  Warning  Failed     85s                  kubelet            Error: ImagePullBackOff
  Normal   Pulling    74s (x2 over 2m58s)  kubelet            Pulling image "busybox:stable"
addons_test.go:995: (dbg) Run:  kubectl --context addons-093168 logs test-local-path -n default
addons_test.go:995: (dbg) Non-zero exit: kubectl --context addons-093168 logs test-local-path -n default: exit status 1 (66.019972ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:995: kubectl --context addons-093168 logs test-local-path -n default: exit status 1
addons_test.go:996: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
--- FAIL: TestAddons/parallel/LocalPath (185.82s)
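
This failure is the same rate limit hitting busybox:stable. An alternative mitigation that avoids the registry entirely is to pre-load the needed images into the node's container storage, so CRI-O resolves them locally instead of pulling; for example, from a host that already has the image cached:

	minikube -p addons-093168 image load busybox:stable
	minikube -p addons-093168 image ls   # confirm the image is now present on the node
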

TestFunctional/parallel/PersistentVolumeClaim (187.98s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [66cb0295-9020-41f3-95a0-241a1b2a608e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004164937s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-554247 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-554247 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-554247 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-554247 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23c013f5-68ca-4815-b0e1-e702efdd27de] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-554247 -n functional-554247
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-09-17 09:04:13.761377173 +0000 UTC m=+1571.043523055
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-554247 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-554247 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-554247/192.168.49.2
Start Time:       Tue, 17 Sep 2024 09:01:13 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqxxv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-bqxxv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-554247
  Warning  Failed     90s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     90s                  kubelet            Error: ErrImagePull
  Normal   BackOff    90s                  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     90s                  kubelet            Error: ImagePullBackOff
  Normal   Pulling    76s (x2 over 2m59s)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-554247 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-554247 logs sp-pod -n default: exit status 1 (61.725439ms)
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-554247 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
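For reference, the PVC wiring itself looks healthy in the describe output (the pod scheduled and mounts claim myclaim at /tmp/mount); only the image pull fails. A minimal manifest consistent with that output, reconstructed here for illustration rather than copied from the test source, would be:

    # Sketch reconstructed from the describe output above; illustrative only.
    kubectl --context functional-554247 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: docker.io/nginx
        volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
    EOF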
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-554247
helpers_test.go:235: (dbg) docker inspect functional-554247:
-- stdout --
	[
	    {
	        "Id": "5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f",
	        "Created": "2024-09-17T08:58:53.568945055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 423793,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T08:58:53.674146945Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f/hosts",
	        "LogPath": "/var/lib/docker/containers/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f-json.log",
	        "Name": "/functional-554247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-554247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-554247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2509ca71782f616f6c3e449bb9de96e62ab145f31f634cd4fc644e00bae5c4ea-init/diff:/var/lib/docker/overlay2/22ea169b69b771958d5aa21d4886a5f67242c32d10a387f6aa1fe4f8feab18cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2509ca71782f616f6c3e449bb9de96e62ab145f31f634cd4fc644e00bae5c4ea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2509ca71782f616f6c3e449bb9de96e62ab145f31f634cd4fc644e00bae5c4ea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2509ca71782f616f6c3e449bb9de96e62ab145f31f634cd4fc644e00bae5c4ea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-554247",
	                "Source": "/var/lib/docker/volumes/functional-554247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-554247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-554247",
	                "name.minikube.sigs.k8s.io": "functional-554247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18741beb34dc00bc45906704ee85363fdc6558bbb97affbc3945e9ca5a113eb9",
	            "SandboxKey": "/var/run/docker/netns/18741beb34dc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-554247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "357ce117968557501ccdbda1f76571fd7b982ee67fd2c6d6effdd2f16be8d757",
	                    "EndpointID": "17160eb36279ff860bf1316a84764719d5f048676969209d0341d8cc9fefc7ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-554247",
	                        "5e87cd275fab"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
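Note the pattern in the inspect output: HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort (so Docker assigns an ephemeral port), while NetworkSettings.Ports records what was actually assigned (for example, 8441/tcp -> 127.0.0.1:33151 for the apiserver). Two illustrative ways to read a mapping back out:

    # Ask Docker for the host port mapped to the apiserver port (8441/tcp):
    docker port functional-554247 8441/tcp
    # Or pull the same value through an inspect template:
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-554247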
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-554247 -n functional-554247
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-554247 logs -n 25: (1.443043579s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-554247 ssh findmnt                                              | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | -T /mount1                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh findmnt                                              | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | -T /mount2                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh findmnt                                              | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | -T /mount3                                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-554247                                                       | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC |                     |
	|                | --kill=true                                                                |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh sudo cat                                             | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | /etc/test/nested/copy/396125/hosts                                         |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh sudo                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC |                     |
	|                | systemctl is-active docker                                                 |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh sudo                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC |                     |
	|                | systemctl is-active containerd                                             |                   |         |         |                     |                     |
	| image          | functional-554247 image load --daemon                                      | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | kicbase/echo-server:functional-554247                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247 image ls                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	| image          | functional-554247 image load --daemon                                      | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | kicbase/echo-server:functional-554247                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247 image ls                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	| image          | functional-554247 image save kicbase/echo-server:functional-554247         | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247 image rm                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | kicbase/echo-server:functional-554247                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247 image ls                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	| image          | functional-554247 image load                                               | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| update-context | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| image          | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh pgrep                                                | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-554247 image build -t                                           | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | localhost/my-image:functional-554247                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-554247 image ls                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 09:01:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 09:01:49.265552  436803 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:01:49.265652  436803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:01:49.265660  436803 out.go:358] Setting ErrFile to fd 2...
	I0917 09:01:49.265664  436803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:01:49.265911  436803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:01:49.266421  436803 out.go:352] Setting JSON to false
	I0917 09:01:49.267397  436803 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9858,"bootTime":1726553851,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 09:01:49.267518  436803 start.go:139] virtualization: kvm guest
	I0917 09:01:49.269267  436803 out.go:177] * [functional-554247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 09:01:49.270333  436803 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 09:01:49.270370  436803 notify.go:220] Checking for updates...
	I0917 09:01:49.272498  436803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:01:49.274087  436803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 09:01:49.275190  436803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 09:01:49.276335  436803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 09:01:49.277530  436803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 09:01:49.279224  436803 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:01:49.279887  436803 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:01:49.302094  436803 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 09:01:49.302170  436803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 09:01:49.349616  436803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 09:01:49.340491586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 09:01:49.349719  436803 docker.go:318] overlay module found
	I0917 09:01:49.351565  436803 out.go:177] * Using the docker driver based on the existing profile
	I0917 09:01:49.353053  436803 start.go:297] selected driver: docker
	I0917 09:01:49.353072  436803 start.go:901] validating driver "docker" against &{Name:functional-554247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-554247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:01:49.353162  436803 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 09:01:49.355068  436803 out.go:201] 
	W0917 09:01:49.356388  436803 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 09:01:49.357672  436803 out.go:201] 
	
	
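	# Editor's note on the failed start above: the run requested only 250MiB of
	# memory, below minikube's usable minimum of 1800MB, hence
	# RSRC_INSUFFICIENT_REQ_MEMORY. An illustrative retry with a valid size
	# (flag value is an assumption, not taken from this job's configuration):
	#   minikube start -p functional-554247 --memory=2048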
	==> CRI-O <==
	Sep 17 09:02:51 functional-554247 crio[4868]: time="2024-09-17 09:02:51.842786231Z" level=info msg="Image localhost/kicbase/echo-server:functional-554247 not found" id=a27214c8-8962-49d2-969f-06393577078a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:52 functional-554247 crio[4868]: time="2024-09-17 09:02:52.266166468Z" level=info msg="Checking image status: kicbase/echo-server:functional-554247" id=d5694042-f085-48f1-ac78-1f6ab5981158 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:52 functional-554247 crio[4868]: time="2024-09-17 09:02:52.297781277Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-554247" id=2b9026fe-f7b1-40c0-9e5a-93b53960175d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:52 functional-554247 crio[4868]: time="2024-09-17 09:02:52.298054084Z" level=info msg="Image docker.io/kicbase/echo-server:functional-554247 not found" id=2b9026fe-f7b1-40c0-9e5a-93b53960175d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:52 functional-554247 crio[4868]: time="2024-09-17 09:02:52.330576363Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-554247" id=3667dfdb-540b-486a-b462-5dd09f4c723b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:52 functional-554247 crio[4868]: time="2024-09-17 09:02:52.330775422Z" level=info msg="Image localhost/kicbase/echo-server:functional-554247 not found" id=3667dfdb-540b-486a-b462-5dd09f4c723b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:53 functional-554247 crio[4868]: time="2024-09-17 09:02:53.562194886Z" level=info msg="Checking image status: kicbase/echo-server:functional-554247" id=2bf82924-a2c9-4812-a023-a11948fbecc3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:53 functional-554247 crio[4868]: time="2024-09-17 09:02:53.594432784Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-554247" id=d6a0d4df-e6f4-426e-9dc2-b94c257c8fc7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:53 functional-554247 crio[4868]: time="2024-09-17 09:02:53.594637695Z" level=info msg="Image docker.io/kicbase/echo-server:functional-554247 not found" id=d6a0d4df-e6f4-426e-9dc2-b94c257c8fc7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:53 functional-554247 crio[4868]: time="2024-09-17 09:02:53.626172903Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-554247" id=ffafc1dc-8595-4c84-b00c-fee43e88bfb0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:53 functional-554247 crio[4868]: time="2024-09-17 09:02:53.626444933Z" level=info msg="Image localhost/kicbase/echo-server:functional-554247 not found" id=ffafc1dc-8595-4c84-b00c-fee43e88bfb0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:57 functional-554247 crio[4868]: time="2024-09-17 09:02:57.149520816Z" level=info msg="Checking image status: docker.io/nginx:latest" id=94ec43d2-0ae5-475a-ba47-243ffae019ff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:02:57 functional-554247 crio[4868]: time="2024-09-17 09:02:57.150525714Z" level=info msg="Image docker.io/nginx:latest not found" id=94ec43d2-0ae5-475a-ba47-243ffae019ff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:03:21 functional-554247 crio[4868]: time="2024-09-17 09:03:21.949176889Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=ddb71044-2132-4089-8a56-f0a71440e945 name=/runtime.v1.ImageService/PullImage
	Sep 17 09:03:21 functional-554247 crio[4868]: time="2024-09-17 09:03:21.950541704Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 17 09:03:33 functional-554247 crio[4868]: time="2024-09-17 09:03:33.149518590Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=4d8ac705-39c0-4f51-8372-dee2edd16c55 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:03:33 functional-554247 crio[4868]: time="2024-09-17 09:03:33.149798920Z" level=info msg="Image docker.io/nginx:alpine not found" id=4d8ac705-39c0-4f51-8372-dee2edd16c55 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:03:44 functional-554247 crio[4868]: time="2024-09-17 09:03:44.149690060Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1636e4b8-c295-4aac-b047-10812d13a389 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:03:44 functional-554247 crio[4868]: time="2024-09-17 09:03:44.149905109Z" level=info msg="Image docker.io/nginx:alpine not found" id=1636e4b8-c295-4aac-b047-10812d13a389 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:03:52 functional-554247 crio[4868]: time="2024-09-17 09:03:52.696811605Z" level=info msg="Pulling image: docker.io/nginx:latest" id=81465823-0a01-4d19-90e3-6d9a4d6dd794 name=/runtime.v1.ImageService/PullImage
	Sep 17 09:03:52 functional-554247 crio[4868]: time="2024-09-17 09:03:52.718443444Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 17 09:03:53 functional-554247 crio[4868]: time="2024-09-17 09:03:53.692435308Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f9feaa0f-3b7e-4a73-b21a-e9fbb8f40d21 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:03:53 functional-554247 crio[4868]: time="2024-09-17 09:03:53.692654127Z" level=info msg="Image docker.io/mysql:5.7 not found" id=f9feaa0f-3b7e-4a73-b21a-e9fbb8f40d21 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:04:07 functional-554247 crio[4868]: time="2024-09-17 09:04:07.149116486Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e205e02b-767e-49fa-8a71-87b6a765ba63 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:04:07 functional-554247 crio[4868]: time="2024-09-17 09:04:07.149387484Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e205e02b-767e-49fa-8a71-87b6a765ba63 name=/runtime.v1.ImageService/ImageStatus
	
	
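	# Editor's note: the CRI-O entries above show the docker.io pulls
	# (nginx:latest, nginx:alpine, mysql:5.7) apparently stalled behind the same
	# Docker Hub rate limit seen in the sp-pod events, and the short name
	# kicbase/echo-server resolving to neither docker.io nor localhost.
	# Illustrative checks from the node (assumed commands, not run by this job):
	#   minikube -p functional-554247 ssh -- sudo crictl images
	#   minikube -p functional-554247 ssh -- sudo crictl pods --state NotReady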
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	70ba351bce8d6       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   e3a52c54cdfe7       dashboard-metrics-scraper-c5db448b4-7wq4v
	7523174d7cb64       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   ff901b8db23dc       kubernetes-dashboard-695b96c756-62nb7
	ba6c6a00dbd5b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              About a minute ago   Exited              mount-munger                0                   2cc10350ff80a       busybox-mount
	3a61c24c3ea64       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   d8a68ab52c40b       hello-node-connect-67bdd5bbb4-88g45
	3a22d12c771ed       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago        Running             echoserver                  0                   1635d2c639df2       hello-node-6b9f76b5c7-ppbl5
	6d7db86928896       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     2                   c2de744cea86a       coredns-7c65d6cfc9-nhqsh
	b71a8e3d36abd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 3 minutes ago        Running             kindnet-cni                 2                   d8a71df41e4a8       kindnet-4t2bx
	631261ef70bce       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 3 minutes ago        Running             kube-proxy                  2                   3ee5c9a33928e       kube-proxy-xgcnn
	b50d740033cc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         3                   bb8df818b9314       storage-provisioner
	9acce25ee3c23       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 3 minutes ago        Running             kube-apiserver              0                   d3eecf64eee51       kube-apiserver-functional-554247
	2c4e96f556f54       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago        Running             etcd                        2                   c5b2586d69b44       etcd-functional-554247
	817386fbbdd57       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 3 minutes ago        Running             kube-scheduler              2                   84ccf19f37c55       kube-scheduler-functional-554247
	99c0e7fc99bcb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 3 minutes ago        Running             kube-controller-manager     2                   f9a06966eb092       kube-controller-manager-functional-554247
	84ddaccf5c43e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Exited              storage-provisioner         2                   bb8df818b9314       storage-provisioner
	819497f815d23       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 4 minutes ago        Exited              kube-controller-manager     1                   f9a06966eb092       kube-controller-manager-functional-554247
	f1e5c7d1c3b1c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 4 minutes ago        Exited              kube-scheduler              1                   84ccf19f37c55       kube-scheduler-functional-554247
	9b8699a7acafe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 4 minutes ago        Exited              etcd                        1                   c5b2586d69b44       etcd-functional-554247
	b9dd53e511cea       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 4 minutes ago        Exited              kindnet-cni                 1                   d8a71df41e4a8       kindnet-4t2bx
	7d35b25f87d37       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 4 minutes ago        Exited              kube-proxy                  1                   3ee5c9a33928e       kube-proxy-xgcnn
	b0381caceb7ee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago        Exited              coredns                     1                   c2de744cea86a       coredns-7c65d6cfc9-nhqsh
	
	
	==> coredns [6d7db869288963994665df7ee939a4a16a20d2b8a8d84dc4531cc3ddb8d72334] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37814 - 27439 "HINFO IN 2288429965698402741.408185860118899513. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.035631555s
	
	
	==> coredns [b0381caceb7eed748d4d6bc4f791f55b567bb05058cbcb0130d7f5472e2d0dbf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55363 - 1631 "HINFO IN 5850368150904606857.163376773538314082. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017966551s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-554247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-554247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=functional-554247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T08_59_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:59:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-554247
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:04:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:03:15 +0000   Tue, 17 Sep 2024 08:59:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:03:15 +0000   Tue, 17 Sep 2024 08:59:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:03:15 +0000   Tue, 17 Sep 2024 08:59:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:03:15 +0000   Tue, 17 Sep 2024 08:59:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-554247
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9b62ec4e6aa449aa34d48161c7f6269
	  System UUID:                4d7e9c0c-2060-4f31-a201-43b75ffa8977
	  Boot ID:                    8c59a26b-5d0c-4753-9e88-ef03399e569b
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-ppbl5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-node-connect-67bdd5bbb4-88g45          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     mysql-6cdb49bbb-c9mm4                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     85s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7c65d6cfc9-nhqsh                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m1s
	  kube-system                 etcd-functional-554247                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m7s
	  kube-system                 kindnet-4t2bx                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m1s
	  kube-system                 kube-apiserver-functional-554247             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-controller-manager-functional-554247    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-xgcnn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-functional-554247             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-7wq4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-62nb7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m                     kube-proxy       
	  Normal   Starting                 3m31s                  kube-proxy       
	  Normal   Starting                 4m4s                   kube-proxy       
	  Warning  CgroupV1                 5m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m7s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m6s                   kubelet          Node functional-554247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m6s                   kubelet          Node functional-554247 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m6s                   kubelet          Node functional-554247 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m2s                   node-controller  Node functional-554247 event: Registered Node functional-554247 in Controller
	  Normal   NodeReady                4m20s                  kubelet          Node functional-554247 status is now: NodeReady
	  Normal   RegisteredNode           4m2s                   node-controller  Node functional-554247 event: Registered Node functional-554247 in Controller
	  Normal   NodeHasSufficientMemory  3m36s (x8 over 3m36s)  kubelet          Node functional-554247 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 3m36s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m36s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    3m36s (x8 over 3m36s)  kubelet          Node functional-554247 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m36s (x7 over 3m36s)  kubelet          Node functional-554247 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m30s                  node-controller  Node functional-554247 event: Registered Node functional-554247 in Controller
	
	
	==> dmesg <==
	[  +0.000405] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a b6 29 69 41 ca 08 06
	[ +18.455196] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 92 00 b0 ac cb 10 08 06
	[  +0.102770] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	[ +10.887970] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cni0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[  +0.094820] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[Sep17 08:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 14 a2 f8 f7 06 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[ +21.407596] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 7a 9f 11 c8 01 08 06
	[  +0.000366] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	[Sep17 09:02] FS-Cache: Duplicate cookie detected
	[  +0.004708] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006922] FS-Cache: O-cookie d=000000008db191f9{9P.session} n=000000003e7e7568
	[  +0.007550] FS-Cache: O-key=[10] '34323937333731353734'
	[  +0.005406] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007945] FS-Cache: N-cookie d=000000008db191f9{9P.session} n=000000004b22d92d
	[  +0.008909] FS-Cache: N-key=[10] '34323937333731353734'
	
	
	==> etcd [2c4e96f556f545680b87cb303473bf01075a97a1211c9dca94b0562734702e1c] <==
	{"level":"info","ts":"2024-09-17T09:00:40.055495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-17T09:00:40.055599Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-17T09:00:40.055759Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:00:40.055802Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:00:40.058386Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T09:00:40.059924Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T09:00:40.060023Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T09:00:40.058492Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T09:00:40.060139Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T09:00:41.346800Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:41.346853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:41.346887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:41.346900Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-17T09:00:41.346905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-17T09:00:41.346914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-17T09:00:41.346933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-17T09:00:41.349465Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-554247 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T09:00:41.349471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:00:41.349468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:00:41.349684Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T09:00:41.349724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T09:00:41.350480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:00:41.350698Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:00:41.351275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-17T09:00:41.351440Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [9b8699a7acafe5ddb88e23b415a6e31ab0deece588048f0144a8e3e3437d9ff9] <==
	{"level":"info","ts":"2024-09-17T09:00:08.834976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:00:08.835014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T09:00:08.835031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:08.835037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:08.835055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:08.835063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:08.836500Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-554247 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T09:00:08.836505Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:00:08.836541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:00:08.836706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T09:00:08.836737Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T09:00:08.837597Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:00:08.837762Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:00:08.838407Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-17T09:00:08.838873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T09:00:30.061563Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-17T09:00:30.061645Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-554247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-17T09:00:30.061735Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:00:30.061852Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:00:30.072850Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:00:30.072888Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T09:00:30.072950Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-17T09:00:30.075786Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T09:00:30.076045Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T09:00:30.076071Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-554247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:04:15 up  2:46,  0 users,  load average: 0.28, 0.33, 0.46
	Linux functional-554247 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b71a8e3d36abdd0a66d7af28f4eccebf0a80562cc06cb7fc23b53a76b238d6a6] <==
	I0917 09:02:14.162219       1 main.go:299] handling current node
	I0917 09:02:24.168027       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:02:24.168062       1 main.go:299] handling current node
	I0917 09:02:34.170415       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:02:34.170464       1 main.go:299] handling current node
	I0917 09:02:44.160806       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:02:44.160844       1 main.go:299] handling current node
	I0917 09:02:54.161087       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:02:54.161131       1 main.go:299] handling current node
	I0917 09:03:04.164021       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:03:04.164064       1 main.go:299] handling current node
	I0917 09:03:14.170118       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:03:14.170157       1 main.go:299] handling current node
	I0917 09:03:24.162114       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:03:24.162163       1 main.go:299] handling current node
	I0917 09:03:34.160995       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:03:34.161033       1 main.go:299] handling current node
	I0917 09:03:44.160906       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:03:44.160937       1 main.go:299] handling current node
	I0917 09:03:54.161039       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:03:54.161082       1 main.go:299] handling current node
	I0917 09:04:04.169692       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:04:04.169732       1 main.go:299] handling current node
	I0917 09:04:14.164078       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:04:14.164111       1 main.go:299] handling current node
	
	
	==> kindnet [b9dd53e511cea54034a852ea971e86031d49172b916874160edcb2eee2d9df0d] <==
	I0917 09:00:07.547069       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0917 09:00:07.633997       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0917 09:00:07.634349       1 main.go:148] setting mtu 1500 for CNI 
	I0917 09:00:07.634369       1 main.go:178] kindnetd IP family: "ipv4"
	I0917 09:00:07.634393       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0917 09:00:08.048984       1 controller.go:334] Starting controller kube-network-policies
	I0917 09:00:08.049007       1 controller.go:338] Waiting for informer caches to sync
	I0917 09:00:08.049013       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0917 09:00:10.349754       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0917 09:00:10.350561       1 metrics.go:61] Registering metrics
	I0917 09:00:10.350643       1 controller.go:374] Syncing nftables rules
	I0917 09:00:18.049390       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:00:18.049458       1 main.go:299] handling current node
	I0917 09:00:28.052059       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:00:28.052103       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9acce25ee3c23e71b64a44e6a58dde20583303110de6f1aab7708c619121534d] <==
	I0917 09:00:42.432153       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 09:00:42.432161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 09:00:42.437449       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 09:00:42.437477       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:00:42.437487       1 policy_source.go:224] refreshing policies
	I0917 09:00:42.442456       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0917 09:00:42.444622       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 09:00:42.446316       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 09:00:43.283248       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 09:00:44.080049       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 09:00:44.174200       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 09:00:44.185752       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 09:00:44.263026       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 09:00:44.270853       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 09:00:46.055280       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 09:00:46.106499       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 09:01:03.442426       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.92.135"}
	I0917 09:01:07.430765       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0917 09:01:07.535210       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.112.41"}
	I0917 09:01:08.361671       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.232.82"}
	I0917 09:01:09.558504       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.225.196"}
	I0917 09:01:51.775398       1 controller.go:615] quota admission added evaluator for: namespaces
	I0917 09:01:51.954610       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.97.78"}
	I0917 09:01:51.968013       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.251.169"}
	I0917 09:02:50.281669       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.139.222"}
	
	
	==> kube-controller-manager [819497f815d23d465ea3f03fde7044755e4631503615595821fee9ce1c607d10] <==
	I0917 09:00:13.642821       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0917 09:00:13.644040       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0917 09:00:13.652666       1 shared_informer.go:320] Caches are synced for persistent volume
	I0917 09:00:13.668947       1 shared_informer.go:320] Caches are synced for TTL
	I0917 09:00:13.674184       1 shared_informer.go:320] Caches are synced for GC
	I0917 09:00:13.683430       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0917 09:00:13.743349       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0917 09:00:13.743379       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0917 09:00:13.743341       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0917 09:00:13.743460       1 shared_informer.go:320] Caches are synced for taint
	I0917 09:00:13.743548       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 09:00:13.743599       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0917 09:00:13.743711       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-554247"
	I0917 09:00:13.743789       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 09:00:13.749630       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 09:00:13.751200       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="203.952976ms"
	I0917 09:00:13.751451       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="106.262µs"
	I0917 09:00:13.792166       1 shared_informer.go:320] Caches are synced for daemon sets
	I0917 09:00:13.793327       1 shared_informer.go:320] Caches are synced for attach detach
	I0917 09:00:13.796760       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 09:00:14.208554       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 09:00:14.241996       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 09:00:14.242037       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 09:00:15.975213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.802965ms"
	I0917 09:00:15.975322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.678µs"
	
	
	==> kube-controller-manager [99c0e7fc99bcbef9942248e7f6feb45d180fe99c60317536d65c9007d7a34e25] <==
	I0917 09:01:51.837140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.485532ms"
	E0917 09:01:51.837171       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 09:01:51.837540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.997195ms"
	E0917 09:01:51.837580       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 09:01:51.842502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.282186ms"
	E0917 09:01:51.842538       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 09:01:51.844652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.774393ms"
	E0917 09:01:51.844686       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 09:01:51.860919       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.518763ms"
	I0917 09:01:51.861302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.936601ms"
	I0917 09:01:51.867997       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.658045ms"
	I0917 09:01:51.868085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="47.036µs"
	I0917 09:01:51.941325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="80.355507ms"
	I0917 09:01:51.941422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="51.364µs"
	I0917 09:01:51.944150       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="54.719µs"
	I0917 09:02:49.567705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.053485ms"
	I0917 09:02:49.567855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="61.277µs"
	I0917 09:02:50.332377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="12.593139ms"
	I0917 09:02:50.341674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="9.244874ms"
	I0917 09:02:50.341752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="43.102µs"
	I0917 09:02:51.574614       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.449075ms"
	I0917 09:02:51.574711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.99µs"
	I0917 09:03:15.846613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-554247"
	I0917 09:03:53.702101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="89.035µs"
	I0917 09:04:07.158729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="75.457µs"
	
	
	==> kube-proxy [631261ef70bce64aa5fef430ffd56e7a30334a957598c4e31e4d354dba6e0135] <==
	I0917 09:00:43.672303       1 server_linux.go:66] "Using iptables proxy"
	I0917 09:00:43.791726       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 09:00:43.791816       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:43.812762       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 09:00:43.812816       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:43.814870       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:43.815385       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:43.815486       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:43.817051       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:43.817113       1 config.go:199] "Starting service config controller"
	I0917 09:00:43.817152       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:43.817154       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:43.817078       1 config.go:328] "Starting node config controller"
	I0917 09:00:43.817228       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:43.917346       1 shared_informer.go:320] Caches are synced for node config
	I0917 09:00:43.917375       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:43.917385       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [7d35b25f87d37509ac35c68b3babbbcd700161e9f5a8435ccb9471748027c631] <==
	I0917 09:00:07.737234       1 server_linux.go:66] "Using iptables proxy"
	I0917 09:00:10.334694       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 09:00:10.334888       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:10.446984       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 09:00:10.447067       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:10.449498       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:10.449875       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:10.449980       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:10.451057       1 config.go:328] "Starting node config controller"
	I0917 09:00:10.451093       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:10.451360       1 config.go:199] "Starting service config controller"
	I0917 09:00:10.451377       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:10.451392       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:10.451395       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:10.551203       1 shared_informer.go:320] Caches are synced for node config
	I0917 09:00:10.552401       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:10.552422       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [817386fbbdd57700a8cfaa3b1455d8ef900913b86eb4646e5c1146316da65fb0] <==
	I0917 09:00:40.710059       1 serving.go:386] Generated self-signed cert in-memory
	W0917 09:00:42.342218       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 09:00:42.342358       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 09:00:42.342428       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 09:00:42.342470       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 09:00:42.442127       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 09:00:42.442169       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:42.444604       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 09:00:42.444664       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 09:00:42.444965       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 09:00:42.445033       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:00:42.544897       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1e5c7d1c3b1ca2d4998ac6f80b9a7f4bc3d3efccd20761f59ff3182f565484e] <==
	I0917 09:00:08.437419       1 serving.go:386] Generated self-signed cert in-memory
	W0917 09:00:10.166322       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 09:00:10.166459       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 09:00:10.166536       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 09:00:10.166574       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 09:00:10.256964       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 09:00:10.256990       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:10.258877       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 09:00:10.258988       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 09:00:10.259010       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 09:00:10.259023       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:00:10.359434       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 09:00:30.062320       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 09:00:30.062424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:00:30.062795       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:00:30.063052       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 09:02:59 functional-554247 kubelet[5232]: E0917 09:02:59.287760    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563779287516565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:09 functional-554247 kubelet[5232]: E0917 09:03:09.289540    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563789289355632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:09 functional-554247 kubelet[5232]: E0917 09:03:09.289581    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563789289355632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:19 functional-554247 kubelet[5232]: E0917 09:03:19.291133    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563799290930524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:19 functional-554247 kubelet[5232]: E0917 09:03:19.291178    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563799290930524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:21 functional-554247 kubelet[5232]: E0917 09:03:21.948761    5232 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 09:03:21 functional-554247 kubelet[5232]: E0917 09:03:21.948827    5232 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 09:03:21 functional-554247 kubelet[5232]: E0917 09:03:21.949047    5232 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2pq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(5f1c337d-5fd3-4ab4-ae51-15f07b6c4699): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 09:03:21 functional-554247 kubelet[5232]: E0917 09:03:21.950317    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5f1c337d-5fd3-4ab4-ae51-15f07b6c4699"
	Sep 17 09:03:29 functional-554247 kubelet[5232]: E0917 09:03:29.292692    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563809292503928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:29 functional-554247 kubelet[5232]: E0917 09:03:29.292740    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563809292503928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:33 functional-554247 kubelet[5232]: E0917 09:03:33.150041    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="5f1c337d-5fd3-4ab4-ae51-15f07b6c4699"
	Sep 17 09:03:39 functional-554247 kubelet[5232]: E0917 09:03:39.293901    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563819293741272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:39 functional-554247 kubelet[5232]: E0917 09:03:39.293940    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563819293741272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:49 functional-554247 kubelet[5232]: E0917 09:03:49.295649    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563829295478161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:49 functional-554247 kubelet[5232]: E0917 09:03:49.295694    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563829295478161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:52 functional-554247 kubelet[5232]: E0917 09:03:52.696406    5232 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 17 09:03:52 functional-554247 kubelet[5232]: E0917 09:03:52.696468    5232 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 17 09:03:52 functional-554247 kubelet[5232]: E0917 09:03:52.696748    5232 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q679r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-c9mm4_default(f6b9223b-4497-46b0-a09b-058c305a2544): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 09:03:52 functional-554247 kubelet[5232]: E0917 09:03:52.698023    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-c9mm4" podUID="f6b9223b-4497-46b0-a09b-058c305a2544"
	Sep 17 09:03:53 functional-554247 kubelet[5232]: E0917 09:03:53.692903    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-c9mm4" podUID="f6b9223b-4497-46b0-a09b-058c305a2544"
	Sep 17 09:03:59 functional-554247 kubelet[5232]: E0917 09:03:59.297004    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563839296836860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:03:59 functional-554247 kubelet[5232]: E0917 09:03:59.297045    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563839296836860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:04:09 functional-554247 kubelet[5232]: E0917 09:04:09.298566    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563849298382208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:04:09 functional-554247 kubelet[5232]: E0917 09:04:09.298617    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726563849298382208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [7523174d7cb64faed517830a19141027dca85b0adf57f77e93f038cef0ed43f3] <==
	2024/09/17 09:02:49 Using namespace: kubernetes-dashboard
	2024/09/17 09:02:49 Using in-cluster config to connect to apiserver
	2024/09/17 09:02:49 Using secret token for csrf signing
	2024/09/17 09:02:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/17 09:02:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/17 09:02:49 Successful initial request to the apiserver, version: v1.31.1
	2024/09/17 09:02:49 Generating JWE encryption key
	2024/09/17 09:02:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/17 09:02:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/17 09:02:49 Initializing JWE encryption key from synchronized object
	2024/09/17 09:02:49 Creating in-cluster Sidecar client
	2024/09/17 09:02:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/17 09:02:49 Serving insecurely on HTTP port: 9090
	2024/09/17 09:03:19 Successful request to sidecar
	2024/09/17 09:02:49 Starting overwatch
	
	
	==> storage-provisioner [84ddaccf5c43ecafe414e62ec25a1abc272bbd5333cf93ed6a42a66726fbed8f] <==
	I0917 09:00:19.011450       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 09:00:19.018504       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 09:00:19.018539       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [b50d740033cc1444dbaecbf46b9776563f354d4d392908cf87487c2e6916595e] <==
	I0917 09:00:43.634004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 09:00:43.644788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 09:00:43.645149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 09:01:01.041971       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 09:01:01.042044       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79a1e9f3-3546-4d19-8e79-96909b1e11b1", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-554247_dc7d129b-5998-42d8-87d7-023655701af8 became leader
	I0917 09:01:01.042127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-554247_dc7d129b-5998-42d8-87d7-023655701af8!
	I0917 09:01:01.142858       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-554247_dc7d129b-5998-42d8-87d7-023655701af8!
	I0917 09:01:13.297294       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0917 09:01:13.297489       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f48cc607-4d95-4067-9573-40aa0e50b85c", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0917 09:01:13.297371       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2a575eab-91da-45e1-96db-cce226437abc 353 0 2024-09-17 08:59:15 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-17 08:59:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f48cc607-4d95-4067-9573-40aa0e50b85c &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f48cc607-4d95-4067-9573-40aa0e50b85c 710 0 2024-09-17 09:01:13 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-17 09:01:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-17 09:01:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0917 09:01:13.297813       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f48cc607-4d95-4067-9573-40aa0e50b85c" provisioned
	I0917 09:01:13.297838       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0917 09:01:13.297845       1 volume_store.go:212] Trying to save persistentvolume "pvc-f48cc607-4d95-4067-9573-40aa0e50b85c"
	I0917 09:01:13.307027       1 volume_store.go:219] persistentvolume "pvc-f48cc607-4d95-4067-9573-40aa0e50b85c" saved
	I0917 09:01:13.307309       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f48cc607-4d95-4067-9573-40aa0e50b85c", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f48cc607-4d95-4067-9573-40aa0e50b85c
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-554247 -n functional-554247
helpers_test.go:261: (dbg) Run:  kubectl --context functional-554247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-c9mm4 nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-554247 describe pod busybox-mount mysql-6cdb49bbb-c9mm4 nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-554247 describe pod busybox-mount mysql-6cdb49bbb-c9mm4 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-554247/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 09:01:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ba6c6a00dbd5b97051096da97016c264a032c823e4ed67030300abcbf89fe676
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 17 Sep 2024 09:02:44 +0000
	      Finished:     Tue, 17 Sep 2024 09:02:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v28lv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-v28lv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-554247
	  Normal  Pulling    2m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     92s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.028s (1m21.282s including waiting). Image size: 4631262 bytes.
	  Normal  Created    92s    kubelet            Created container mount-munger
	  Normal  Started    92s    kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-c9mm4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-554247/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 09:02:50 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q679r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q679r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age               From               Message
	  ----     ------     ----              ----               -------
	  Normal   Scheduled  85s               default-scheduler  Successfully assigned default/mysql-6cdb49bbb-c9mm4 to functional-554247
	  Warning  Failed     24s               kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     24s               kubelet            Error: ErrImagePull
	  Normal   BackOff    23s               kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     23s               kubelet            Error: ImagePullBackOff
	  Normal   Pulling    9s (x2 over 86s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-554247/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 09:01:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k2pq2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k2pq2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m7s                 default-scheduler  Successfully assigned default/nginx-svc to functional-554247
	  Warning  Failed     2m35s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     55s (x2 over 2m35s)  kubelet            Error: ErrImagePull
	  Warning  Failed     55s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    43s (x2 over 2m34s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     43s (x2 over 2m34s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    32s (x3 over 3m8s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-554247/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 09:01:13 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqxxv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bqxxv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-554247
	  Warning  Failed     93s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     93s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    93s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     93s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    79s (x2 over 3m2s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0917 09:04:16.787263  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (187.98s)
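Every non-running pod in the post-mortem above (mysql-6cdb49bbb-c9mm4, nginx-svc, sp-pod) is stuck in ImagePullBackOff for the same reason: Docker Hub's "toomanyrequests" rate limit on unauthenticated pulls. A standard Kubernetes mitigation (generic, not part of this test suite; the secret name and credentials below are placeholders) is to pull as an authenticated user via an image pull secret:

	# Create a registry secret from Docker Hub credentials (placeholders)
	kubectl --context functional-554247 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> \
	  --docker-password=<access-token>

and reference it from the pod spec so the kubelet pulls with those credentials:

	spec:
	  imagePullSecrets:
	    - name: dockerhub-creds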

                                                
                                    
TestFunctional/parallel/MySQL (602.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-554247 replace --force -f testdata/mysql.yaml
2024/09/17 09:02:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-c9mm4" [f6b9223b-4497-46b0-a09b-058c305a2544] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-554247 -n functional-554247
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-09-17 09:12:50.616754478 +0000 UTC m=+2087.898900366
functional_test.go:1799: (dbg) Run:  kubectl --context functional-554247 describe po mysql-6cdb49bbb-c9mm4 -n default
functional_test.go:1799: (dbg) kubectl --context functional-554247 describe po mysql-6cdb49bbb-c9mm4 -n default:
Name:             mysql-6cdb49bbb-c9mm4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-554247/192.168.49.2
Start Time:       Tue, 17 Sep 2024 09:02:50 +0000
Labels:           app=mysql
                  pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-6cdb49bbb
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q679r (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-q679r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-c9mm4 to functional-554247
  Normal   Pulling    4m45s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     4m (x4 over 8m58s)     kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     4m (x4 over 8m58s)     kubelet            Error: ErrImagePull
  Normal   BackOff    3m36s (x7 over 8m57s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     3m36s (x7 over 8m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1799: (dbg) Run:  kubectl --context functional-554247 logs mysql-6cdb49bbb-c9mm4 -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-554247 logs mysql-6cdb49bbb-c9mm4 -n default: exit status 1 (67.059947ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-c9mm4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-554247 logs mysql-6cdb49bbb-c9mm4 -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
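The events above make the root cause clear before the post-mortem even runs: the pod never starts because the docker.io/mysql:5.7 pull is rate-limited. A quick manual triage sketch (standard kubectl/minikube commands, not part of the test harness; assumes the image is still pullable from a host with remaining quota):

	# Confirm the pod is blocked on the image pull, not on scheduling
	kubectl --context functional-554247 get pods -l app=mysql -o wide
	kubectl --context functional-554247 describe pods -l app=mysql
	
	# Side-step the rate-limited registry by loading the image into the cluster runtime
	docker pull docker.io/mysql:5.7
	out/minikube-linux-amd64 -p functional-554247 image load docker.io/mysql:5.7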
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-554247
helpers_test.go:235: (dbg) docker inspect functional-554247:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f",
	        "Created": "2024-09-17T08:58:53.568945055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 423793,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T08:58:53.674146945Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f/hosts",
	        "LogPath": "/var/lib/docker/containers/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f/5e87cd275fab4c9827668c14815faeeba86678c563155e153362cdb491868a2f-json.log",
	        "Name": "/functional-554247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-554247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-554247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2509ca71782f616f6c3e449bb9de96e62ab145f31f634cd4fc644e00bae5c4ea-init/diff:/var/lib/docker/overlay2/22ea169b69b771958d5aa21d4886a5f67242c32d10a387f6aa1fe4f8feab18cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2509ca71782f616f6c3e449bb9de96e62ab145f31f634cd4fc644e00bae5c4ea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2509ca71782f616f6c3e449bb9de96e62ab145f31f634cd4fc644e00bae5c4ea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2509ca71782f616f6c3e449bb9de96e62ab145f31f634cd4fc644e00bae5c4ea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-554247",
	                "Source": "/var/lib/docker/volumes/functional-554247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-554247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-554247",
	                "name.minikube.sigs.k8s.io": "functional-554247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18741beb34dc00bc45906704ee85363fdc6558bbb97affbc3945e9ca5a113eb9",
	            "SandboxKey": "/var/run/docker/netns/18741beb34dc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-554247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "357ce117968557501ccdbda1f76571fd7b982ee67fd2c6d6effdd2f16be8d757",
	                    "EndpointID": "17160eb36279ff860bf1316a84764719d5f048676969209d0341d8cc9fefc7ee",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-554247",
	                        "5e87cd275fab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-554247 -n functional-554247
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-554247 logs -n 25: (1.405703611s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-554247 ssh findmnt                                              | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | -T /mount1                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh findmnt                                              | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | -T /mount2                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh findmnt                                              | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | -T /mount3                                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-554247                                                       | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC |                     |
	|                | --kill=true                                                                |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh sudo cat                                             | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | /etc/test/nested/copy/396125/hosts                                         |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh sudo                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC |                     |
	|                | systemctl is-active docker                                                 |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh sudo                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC |                     |
	|                | systemctl is-active containerd                                             |                   |         |         |                     |                     |
	| image          | functional-554247 image load --daemon                                      | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | kicbase/echo-server:functional-554247                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247 image ls                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	| image          | functional-554247 image load --daemon                                      | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | kicbase/echo-server:functional-554247                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247 image ls                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	| image          | functional-554247 image save kicbase/echo-server:functional-554247         | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247 image rm                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | kicbase/echo-server:functional-554247                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247 image ls                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	| image          | functional-554247 image load                                               | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| update-context | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| image          | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-554247                                                          | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-554247 ssh pgrep                                                | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-554247 image build -t                                           | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|                | localhost/my-image:functional-554247                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-554247 image ls                                                 | functional-554247 | jenkins | v1.34.0 | 17 Sep 24 09:02 UTC | 17 Sep 24 09:02 UTC |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 09:01:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 09:01:49.265552  436803 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:01:49.265652  436803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:01:49.265660  436803 out.go:358] Setting ErrFile to fd 2...
	I0917 09:01:49.265664  436803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:01:49.265911  436803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:01:49.266421  436803 out.go:352] Setting JSON to false
	I0917 09:01:49.267397  436803 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9858,"bootTime":1726553851,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 09:01:49.267518  436803 start.go:139] virtualization: kvm guest
	I0917 09:01:49.269267  436803 out.go:177] * [functional-554247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 09:01:49.270333  436803 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 09:01:49.270370  436803 notify.go:220] Checking for updates...
	I0917 09:01:49.272498  436803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:01:49.274087  436803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 09:01:49.275190  436803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 09:01:49.276335  436803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 09:01:49.277530  436803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 09:01:49.279224  436803 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:01:49.279887  436803 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:01:49.302094  436803 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 09:01:49.302170  436803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 09:01:49.349616  436803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 09:01:49.340491586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 09:01:49.349719  436803 docker.go:318] overlay module found
	I0917 09:01:49.351565  436803 out.go:177] * Using the docker driver based on existing profile
	I0917 09:01:49.353053  436803 start.go:297] selected driver: docker
	I0917 09:01:49.353072  436803 start.go:901] validating driver "docker" against &{Name:functional-554247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-554247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:01:49.353162  436803 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 09:01:49.355068  436803 out.go:201] 
	W0917 09:01:49.356388  436803 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 09:01:49.357672  436803 out.go:201] 
	
	
	==> CRI-O <==
	Sep 17 09:11:51 functional-554247 crio[4868]: time="2024-09-17 09:11:51.149294331Z" level=info msg="Image docker.io/nginx:latest not found" id=0baf68d8-ece4-47d2-b918-d62b1288f857 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:11:58 functional-554247 crio[4868]: time="2024-09-17 09:11:58.149644813Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=354f3ebc-3e54-422d-84eb-e5c85a6217f0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:11:58 functional-554247 crio[4868]: time="2024-09-17 09:11:58.149865474Z" level=info msg="Image docker.io/mysql:5.7 not found" id=354f3ebc-3e54-422d-84eb-e5c85a6217f0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:04 functional-554247 crio[4868]: time="2024-09-17 09:12:04.149554553Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a007deba-92a7-45fb-981a-4ecdac2de2ff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:04 functional-554247 crio[4868]: time="2024-09-17 09:12:04.149890928Z" level=info msg="Image docker.io/nginx:alpine not found" id=a007deba-92a7-45fb-981a-4ecdac2de2ff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:04 functional-554247 crio[4868]: time="2024-09-17 09:12:04.150499879Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=fb5f60eb-c059-47e9-8196-89f0d7d03c1d name=/runtime.v1.ImageService/PullImage
	Sep 17 09:12:04 functional-554247 crio[4868]: time="2024-09-17 09:12:04.155371950Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 17 09:12:06 functional-554247 crio[4868]: time="2024-09-17 09:12:06.149990406Z" level=info msg="Checking image status: docker.io/nginx:latest" id=ef360e0e-366f-412c-891e-4d0874d79fc1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:06 functional-554247 crio[4868]: time="2024-09-17 09:12:06.150273587Z" level=info msg="Image docker.io/nginx:latest not found" id=ef360e0e-366f-412c-891e-4d0874d79fc1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:13 functional-554247 crio[4868]: time="2024-09-17 09:12:13.149441161Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=cd68aa4d-d21d-4570-bbda-4f3d52dd8101 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:13 functional-554247 crio[4868]: time="2024-09-17 09:12:13.149703877Z" level=info msg="Image docker.io/mysql:5.7 not found" id=cd68aa4d-d21d-4570-bbda-4f3d52dd8101 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:18 functional-554247 crio[4868]: time="2024-09-17 09:12:18.149070561Z" level=info msg="Checking image status: docker.io/nginx:latest" id=845a785a-dbb2-455c-9dd0-f2759c030577 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:18 functional-554247 crio[4868]: time="2024-09-17 09:12:18.149287046Z" level=info msg="Image docker.io/nginx:latest not found" id=845a785a-dbb2-455c-9dd0-f2759c030577 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:24 functional-554247 crio[4868]: time="2024-09-17 09:12:24.149476166Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=ae980c11-9e85-44e8-97ff-85ef3748e33f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:24 functional-554247 crio[4868]: time="2024-09-17 09:12:24.149715363Z" level=info msg="Image docker.io/mysql:5.7 not found" id=ae980c11-9e85-44e8-97ff-85ef3748e33f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:30 functional-554247 crio[4868]: time="2024-09-17 09:12:30.149622270Z" level=info msg="Checking image status: docker.io/nginx:latest" id=7590556f-feca-429c-a3dc-b1f31519e5d8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:30 functional-554247 crio[4868]: time="2024-09-17 09:12:30.149860765Z" level=info msg="Image docker.io/nginx:latest not found" id=7590556f-feca-429c-a3dc-b1f31519e5d8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:35 functional-554247 crio[4868]: time="2024-09-17 09:12:35.149374937Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=51bd9d44-3fc9-462d-b98d-7d578670c25d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:35 functional-554247 crio[4868]: time="2024-09-17 09:12:35.149641428Z" level=info msg="Image docker.io/mysql:5.7 not found" id=51bd9d44-3fc9-462d-b98d-7d578670c25d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:44 functional-554247 crio[4868]: time="2024-09-17 09:12:44.149395941Z" level=info msg="Checking image status: docker.io/nginx:latest" id=696ccccd-bb85-4a3e-adb6-d8cc288e803a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:44 functional-554247 crio[4868]: time="2024-09-17 09:12:44.149700249Z" level=info msg="Image docker.io/nginx:latest not found" id=696ccccd-bb85-4a3e-adb6-d8cc288e803a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:46 functional-554247 crio[4868]: time="2024-09-17 09:12:46.149426476Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=d93dad5b-ad8c-49f9-822a-01e55b70e5ff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:46 functional-554247 crio[4868]: time="2024-09-17 09:12:46.149668896Z" level=info msg="Image docker.io/mysql:5.7 not found" id=d93dad5b-ad8c-49f9-822a-01e55b70e5ff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:48 functional-554247 crio[4868]: time="2024-09-17 09:12:48.149576917Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=88e898a2-7265-437e-b967-863fe15b85f5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 09:12:48 functional-554247 crio[4868]: time="2024-09-17 09:12:48.149793759Z" level=info msg="Image docker.io/nginx:alpine not found" id=88e898a2-7265-437e-b967-863fe15b85f5 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	70ba351bce8d6       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   e3a52c54cdfe7       dashboard-metrics-scraper-c5db448b4-7wq4v
	7523174d7cb64       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   ff901b8db23dc       kubernetes-dashboard-695b96c756-62nb7
	ba6c6a00dbd5b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   2cc10350ff80a       busybox-mount
	3a61c24c3ea64       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               11 minutes ago      Running             echoserver                  0                   d8a68ab52c40b       hello-node-connect-67bdd5bbb4-88g45
	3a22d12c771ed       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               11 minutes ago      Running             echoserver                  0                   1635d2c639df2       hello-node-6b9f76b5c7-ppbl5
	6d7db86928896       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 12 minutes ago      Running             coredns                     2                   c2de744cea86a       coredns-7c65d6cfc9-nhqsh
	b71a8e3d36abd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 12 minutes ago      Running             kindnet-cni                 2                   d8a71df41e4a8       kindnet-4t2bx
	631261ef70bce       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 12 minutes ago      Running             kube-proxy                  2                   3ee5c9a33928e       kube-proxy-xgcnn
	b50d740033cc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 12 minutes ago      Running             storage-provisioner         3                   bb8df818b9314       storage-provisioner
	9acce25ee3c23       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 12 minutes ago      Running             kube-apiserver              0                   d3eecf64eee51       kube-apiserver-functional-554247
	2c4e96f556f54       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 12 minutes ago      Running             etcd                        2                   c5b2586d69b44       etcd-functional-554247
	817386fbbdd57       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 12 minutes ago      Running             kube-scheduler              2                   84ccf19f37c55       kube-scheduler-functional-554247
	99c0e7fc99bcb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 12 minutes ago      Running             kube-controller-manager     2                   f9a06966eb092       kube-controller-manager-functional-554247
	84ddaccf5c43e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 12 minutes ago      Exited              storage-provisioner         2                   bb8df818b9314       storage-provisioner
	819497f815d23       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 12 minutes ago      Exited              kube-controller-manager     1                   f9a06966eb092       kube-controller-manager-functional-554247
	f1e5c7d1c3b1c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 12 minutes ago      Exited              kube-scheduler              1                   84ccf19f37c55       kube-scheduler-functional-554247
	9b8699a7acafe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 12 minutes ago      Exited              etcd                        1                   c5b2586d69b44       etcd-functional-554247
	b9dd53e511cea       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 12 minutes ago      Exited              kindnet-cni                 1                   d8a71df41e4a8       kindnet-4t2bx
	7d35b25f87d37       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 12 minutes ago      Exited              kube-proxy                  1                   3ee5c9a33928e       kube-proxy-xgcnn
	b0381caceb7ee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 12 minutes ago      Exited              coredns                     1                   c2de744cea86a       coredns-7c65d6cfc9-nhqsh
	
	
	==> coredns [6d7db869288963994665df7ee939a4a16a20d2b8a8d84dc4531cc3ddb8d72334] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37814 - 27439 "HINFO IN 2288429965698402741.408185860118899513. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.035631555s
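
The NXDOMAIN answer to the HINFO self-query above is CoreDNS's loop-detection probe and is expected at startup. A quick way to confirm in-cluster DNS is actually serving, using the same throwaway-busybox pattern as the other tests in this run (the pod name dns-probe is illustrative):

  $ kubectl --context functional-554247 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local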
	
	
	==> coredns [b0381caceb7eed748d4d6bc4f791f55b567bb05058cbcb0130d7f5472e2d0dbf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55363 - 1631 "HINFO IN 5850368150904606857.163376773538314082. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017966551s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
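
The "forbidden" list/watch errors in this earlier CoreDNS instance are the usual symptom of the pod coming up while the freshly restarted apiserver's RBAC caches are still warming; the "starting server with unsynced Kubernetes API" line shows it proceeded anyway, and the SIGTERM at the end is the restart that handed off to the running instance shown above. If such errors persisted, a first check (a hypothetical follow-up, not run here) would be to confirm the coredns ServiceAccount's permissions via impersonation:

  $ kubectl --context functional-554247 auth can-i list endpointslices.discovery.k8s.io \
      --as=system:serviceaccount:kube-system:coredns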
	
	
	==> describe nodes <==
	Name:               functional-554247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-554247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=functional-554247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T08_59_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:59:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-554247
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:12:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:08:22 +0000   Tue, 17 Sep 2024 08:59:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:08:22 +0000   Tue, 17 Sep 2024 08:59:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:08:22 +0000   Tue, 17 Sep 2024 08:59:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:08:22 +0000   Tue, 17 Sep 2024 08:59:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-554247
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9b62ec4e6aa449aa34d48161c7f6269
	  System UUID:                4d7e9c0c-2060-4f31-a201-43b75ffa8977
	  Boot ID:                    8c59a26b-5d0c-4753-9e88-ef03399e569b
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-ppbl5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-67bdd5bbb4-88g45          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     mysql-6cdb49bbb-c9mm4                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-nhqsh                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-functional-554247                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-4t2bx                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-functional-554247             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-functional-554247    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xgcnn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-554247             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-7wq4v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-62nb7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-554247 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-554247 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-554247 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node functional-554247 event: Registered Node functional-554247 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-554247 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-554247 event: Registered Node functional-554247 in Controller
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-554247 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-554247 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-554247 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-554247 event: Registered Node functional-554247 in Controller
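
This node snapshot can be retaken at any point while the cluster is up (the node carries the profile name):

  $ kubectl --context functional-554247 describe node functional-554247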
	
	
	==> dmesg <==
	[  +0.000405] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a b6 29 69 41 ca 08 06
	[ +18.455196] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000011] ll header: 00000000: ff ff ff ff ff ff 92 00 b0 ac cb 10 08 06
	[  +0.102770] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	[ +10.887970] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cni0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[  +0.094820] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[Sep17 08:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 14 a2 f8 f7 06 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 40 f6 fc cc a2 08 06
	[ +21.407596] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 7a 9f 11 c8 01 08 06
	[  +0.000366] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 22 8d 84 a2 25 2e 08 06
	[Sep17 09:02] FS-Cache: Duplicate cookie detected
	[  +0.004708] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006922] FS-Cache: O-cookie d=000000008db191f9{9P.session} n=000000003e7e7568
	[  +0.007550] FS-Cache: O-key=[10] '34323937333731353734'
	[  +0.005406] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007945] FS-Cache: N-cookie d=000000008db191f9{9P.session} n=000000004b22d92d
	[  +0.008909] FS-Cache: N-key=[10] '34323937333731353734'
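
The martian-source entries are kernel routing noise commonly seen with bridged CNI traffic in nested container setups and are not in themselves a failure signal. If the noise is unwanted, martian logging can be toggled on the host (a host-level sysctl; nothing in this run changes it):

  $ sudo sysctl -w net.ipv4.conf.all.log_martians=0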
	
	
	==> etcd [2c4e96f556f545680b87cb303473bf01075a97a1211c9dca94b0562734702e1c] <==
	{"level":"info","ts":"2024-09-17T09:00:40.055802Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:00:40.058386Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T09:00:40.059924Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T09:00:40.060023Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T09:00:40.058492Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T09:00:40.060139Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T09:00:41.346800Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:41.346853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:41.346887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:41.346900Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-17T09:00:41.346905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-17T09:00:41.346914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-17T09:00:41.346933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-17T09:00:41.349465Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-554247 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T09:00:41.349471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:00:41.349468Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:00:41.349684Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T09:00:41.349724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T09:00:41.350480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:00:41.350698Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:00:41.351275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-17T09:00:41.351440Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T09:10:41.370709Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1100}
	{"level":"info","ts":"2024-09-17T09:10:41.390473Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1100,"took":"19.413302ms","hash":3729976655,"current-db-size-bytes":4349952,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1654784,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-17T09:10:41.390520Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3729976655,"revision":1100,"compact-revision":-1}
	
	
	==> etcd [9b8699a7acafe5ddb88e23b415a6e31ab0deece588048f0144a8e3e3437d9ff9] <==
	{"level":"info","ts":"2024-09-17T09:00:08.834976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:00:08.835014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T09:00:08.835031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:08.835037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:08.835055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:08.835063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-17T09:00:08.836500Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-554247 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T09:00:08.836505Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:00:08.836541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:00:08.836706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T09:00:08.836737Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T09:00:08.837597Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:00:08.837762Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T09:00:08.838407Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-17T09:00:08.838873Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T09:00:30.061563Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-17T09:00:30.061645Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-554247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-17T09:00:30.061735Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:00:30.061852Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:00:30.072850Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T09:00:30.072888Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T09:00:30.072950Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-17T09:00:30.075786Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T09:00:30.076045Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T09:00:30.076071Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-554247","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:12:51 up  2:55,  0 users,  load average: 0.16, 0.14, 0.28
	Linux functional-554247 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b71a8e3d36abdd0a66d7af28f4eccebf0a80562cc06cb7fc23b53a76b238d6a6] <==
	I0917 09:10:44.161647       1 main.go:299] handling current node
	I0917 09:10:54.162535       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:10:54.162590       1 main.go:299] handling current node
	I0917 09:11:04.168026       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:11:04.168072       1 main.go:299] handling current node
	I0917 09:11:14.161122       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:11:14.161156       1 main.go:299] handling current node
	I0917 09:11:24.161409       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:11:24.161462       1 main.go:299] handling current node
	I0917 09:11:34.170531       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:11:34.170583       1 main.go:299] handling current node
	I0917 09:11:44.160829       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:11:44.160871       1 main.go:299] handling current node
	I0917 09:11:54.161098       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:11:54.161139       1 main.go:299] handling current node
	I0917 09:12:04.168028       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:12:04.168062       1 main.go:299] handling current node
	I0917 09:12:14.161399       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:12:14.161457       1 main.go:299] handling current node
	I0917 09:12:24.161057       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:12:24.161095       1 main.go:299] handling current node
	I0917 09:12:34.168017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:12:34.168051       1 main.go:299] handling current node
	I0917 09:12:44.161623       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:12:44.161662       1 main.go:299] handling current node
	
	
	==> kindnet [b9dd53e511cea54034a852ea971e86031d49172b916874160edcb2eee2d9df0d] <==
	I0917 09:00:07.547069       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0917 09:00:07.633997       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0917 09:00:07.634349       1 main.go:148] setting mtu 1500 for CNI 
	I0917 09:00:07.634369       1 main.go:178] kindnetd IP family: "ipv4"
	I0917 09:00:07.634393       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0917 09:00:08.048984       1 controller.go:334] Starting controller kube-network-policies
	I0917 09:00:08.049007       1 controller.go:338] Waiting for informer caches to sync
	I0917 09:00:08.049013       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0917 09:00:10.349754       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0917 09:00:10.350561       1 metrics.go:61] Registering metrics
	I0917 09:00:10.350643       1 controller.go:374] Syncing nftables rules
	I0917 09:00:18.049390       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:00:18.049458       1 main.go:299] handling current node
	I0917 09:00:28.052059       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 09:00:28.052103       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9acce25ee3c23e71b64a44e6a58dde20583303110de6f1aab7708c619121534d] <==
	I0917 09:00:42.432153       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 09:00:42.432161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 09:00:42.437449       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 09:00:42.437477       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 09:00:42.437487       1 policy_source.go:224] refreshing policies
	I0917 09:00:42.442456       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0917 09:00:42.444622       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 09:00:42.446316       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 09:00:43.283248       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 09:00:44.080049       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 09:00:44.174200       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 09:00:44.185752       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 09:00:44.263026       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 09:00:44.270853       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 09:00:46.055280       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 09:00:46.106499       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 09:01:03.442426       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.92.135"}
	I0917 09:01:07.430765       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0917 09:01:07.535210       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.112.41"}
	I0917 09:01:08.361671       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.232.82"}
	I0917 09:01:09.558504       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.225.196"}
	I0917 09:01:51.775398       1 controller.go:615] quota admission added evaluator for: namespaces
	I0917 09:01:51.954610       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.97.78"}
	I0917 09:01:51.968013       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.251.169"}
	I0917 09:02:50.281669       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.139.222"}
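
The alloc.go entries record the ClusterIPs assigned to each Service the tests created; they can be cross-checked against the live objects:

  $ kubectl --context functional-554247 get svc -A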
	
	
	==> kube-controller-manager [819497f815d23d465ea3f03fde7044755e4631503615595821fee9ce1c607d10] <==
	I0917 09:00:13.642821       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0917 09:00:13.644040       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0917 09:00:13.652666       1 shared_informer.go:320] Caches are synced for persistent volume
	I0917 09:00:13.668947       1 shared_informer.go:320] Caches are synced for TTL
	I0917 09:00:13.674184       1 shared_informer.go:320] Caches are synced for GC
	I0917 09:00:13.683430       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0917 09:00:13.743349       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0917 09:00:13.743379       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0917 09:00:13.743341       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0917 09:00:13.743460       1 shared_informer.go:320] Caches are synced for taint
	I0917 09:00:13.743548       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 09:00:13.743599       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0917 09:00:13.743711       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-554247"
	I0917 09:00:13.743789       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 09:00:13.749630       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 09:00:13.751200       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="203.952976ms"
	I0917 09:00:13.751451       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="106.262µs"
	I0917 09:00:13.792166       1 shared_informer.go:320] Caches are synced for daemon sets
	I0917 09:00:13.793327       1 shared_informer.go:320] Caches are synced for attach detach
	I0917 09:00:13.796760       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 09:00:14.208554       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 09:00:14.241996       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 09:00:14.242037       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 09:00:15.975213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.802965ms"
	I0917 09:00:15.975322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.678µs"
	
	
	==> kube-controller-manager [99c0e7fc99bcbef9942248e7f6feb45d180fe99c60317536d65c9007d7a34e25] <==
	I0917 09:01:51.861302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.936601ms"
	I0917 09:01:51.867997       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.658045ms"
	I0917 09:01:51.868085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="47.036µs"
	I0917 09:01:51.941325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="80.355507ms"
	I0917 09:01:51.941422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="51.364µs"
	I0917 09:01:51.944150       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="54.719µs"
	I0917 09:02:49.567705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.053485ms"
	I0917 09:02:49.567855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="61.277µs"
	I0917 09:02:50.332377       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="12.593139ms"
	I0917 09:02:50.341674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="9.244874ms"
	I0917 09:02:50.341752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="43.102µs"
	I0917 09:02:51.574614       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.449075ms"
	I0917 09:02:51.574711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="56.99µs"
	I0917 09:03:15.846613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-554247"
	I0917 09:03:53.702101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="89.035µs"
	I0917 09:04:07.158729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="75.457µs"
	I0917 09:05:44.161180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="77.39µs"
	I0917 09:05:56.159064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="99.494µs"
	I0917 09:07:30.158960       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="84.178µs"
	I0917 09:07:41.160440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="72.915µs"
	I0917 09:08:22.140368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-554247"
	I0917 09:09:03.160410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="110.622µs"
	I0917 09:09:14.160258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="166.497µs"
	I0917 09:11:08.158747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="126.907µs"
	I0917 09:11:21.158005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="68.002µs"
	
	
	==> kube-proxy [631261ef70bce64aa5fef430ffd56e7a30334a957598c4e31e4d354dba6e0135] <==
	I0917 09:00:43.672303       1 server_linux.go:66] "Using iptables proxy"
	I0917 09:00:43.791726       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 09:00:43.791816       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:43.812762       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 09:00:43.812816       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:43.814870       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:43.815385       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:43.815486       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:43.817051       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:43.817113       1 config.go:199] "Starting service config controller"
	I0917 09:00:43.817152       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:43.817154       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:43.817078       1 config.go:328] "Starting node config controller"
	I0917 09:00:43.817228       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:43.917346       1 shared_informer.go:320] Caches are synced for node config
	I0917 09:00:43.917375       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:43.917385       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [7d35b25f87d37509ac35c68b3babbbcd700161e9f5a8435ccb9471748027c631] <==
	I0917 09:00:07.737234       1 server_linux.go:66] "Using iptables proxy"
	I0917 09:00:10.334694       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 09:00:10.334888       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 09:00:10.446984       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 09:00:10.447067       1 server_linux.go:169] "Using iptables Proxier"
	I0917 09:00:10.449498       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 09:00:10.449875       1 server.go:483] "Version info" version="v1.31.1"
	I0917 09:00:10.449980       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:10.451057       1 config.go:328] "Starting node config controller"
	I0917 09:00:10.451093       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 09:00:10.451360       1 config.go:199] "Starting service config controller"
	I0917 09:00:10.451377       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 09:00:10.451392       1 config.go:105] "Starting endpoint slice config controller"
	I0917 09:00:10.451395       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 09:00:10.551203       1 shared_informer.go:320] Caches are synced for node config
	I0917 09:00:10.552401       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 09:00:10.552422       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [817386fbbdd57700a8cfaa3b1455d8ef900913b86eb4646e5c1146316da65fb0] <==
	I0917 09:00:40.710059       1 serving.go:386] Generated self-signed cert in-memory
	W0917 09:00:42.342218       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 09:00:42.342358       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 09:00:42.342428       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 09:00:42.342470       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 09:00:42.442127       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 09:00:42.442169       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:42.444604       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 09:00:42.444664       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 09:00:42.444965       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 09:00:42.445033       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:00:42.544897       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1e5c7d1c3b1ca2d4998ac6f80b9a7f4bc3d3efccd20761f59ff3182f565484e] <==
	I0917 09:00:08.437419       1 serving.go:386] Generated self-signed cert in-memory
	W0917 09:00:10.166322       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 09:00:10.166459       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 09:00:10.166536       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 09:00:10.166574       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 09:00:10.256964       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 09:00:10.256990       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:00:10.258877       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 09:00:10.258988       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 09:00:10.259010       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 09:00:10.259023       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 09:00:10.359434       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 09:00:30.062320       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0917 09:00:30.062424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 09:00:30.062795       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 09:00:30.063052       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 09:11:59 functional-554247 kubelet[5232]: E0917 09:11:59.374207    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564319374040835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:11:59 functional-554247 kubelet[5232]: E0917 09:11:59.374238    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564319374040835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:06 functional-554247 kubelet[5232]: E0917 09:12:06.150553    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="23c013f5-68ca-4815-b0e1-e702efdd27de"
	Sep 17 09:12:09 functional-554247 kubelet[5232]: E0917 09:12:09.376086    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564329375872540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:09 functional-554247 kubelet[5232]: E0917 09:12:09.376122    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564329375872540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:13 functional-554247 kubelet[5232]: E0917 09:12:13.149996    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-c9mm4" podUID="f6b9223b-4497-46b0-a09b-058c305a2544"
	Sep 17 09:12:18 functional-554247 kubelet[5232]: E0917 09:12:18.149548    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="23c013f5-68ca-4815-b0e1-e702efdd27de"
	Sep 17 09:12:19 functional-554247 kubelet[5232]: E0917 09:12:19.377934    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564339377773955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:19 functional-554247 kubelet[5232]: E0917 09:12:19.377966    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564339377773955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:24 functional-554247 kubelet[5232]: E0917 09:12:24.149980    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-c9mm4" podUID="f6b9223b-4497-46b0-a09b-058c305a2544"
	Sep 17 09:12:29 functional-554247 kubelet[5232]: E0917 09:12:29.379750    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564349379521446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:29 functional-554247 kubelet[5232]: E0917 09:12:29.379786    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564349379521446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:30 functional-554247 kubelet[5232]: E0917 09:12:30.150126    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="23c013f5-68ca-4815-b0e1-e702efdd27de"
	Sep 17 09:12:35 functional-554247 kubelet[5232]: E0917 09:12:35.046214    5232 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 09:12:35 functional-554247 kubelet[5232]: E0917 09:12:35.046288    5232 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 09:12:35 functional-554247 kubelet[5232]: E0917 09:12:35.046418    5232 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2pq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(5f1c337d-5fd3-4ab4-ae51-15f07b6c4699): ErrImagePull: loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 09:12:35 functional-554247 kubelet[5232]: E0917 09:12:35.047641    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="5f1c337d-5fd3-4ab4-ae51-15f07b6c4699"
	Sep 17 09:12:35 functional-554247 kubelet[5232]: E0917 09:12:35.149848    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-c9mm4" podUID="f6b9223b-4497-46b0-a09b-058c305a2544"
	Sep 17 09:12:39 functional-554247 kubelet[5232]: E0917 09:12:39.381883    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564359381677163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:39 functional-554247 kubelet[5232]: E0917 09:12:39.381922    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564359381677163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:44 functional-554247 kubelet[5232]: E0917 09:12:44.149998    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="23c013f5-68ca-4815-b0e1-e702efdd27de"
	Sep 17 09:12:46 functional-554247 kubelet[5232]: E0917 09:12:46.149933    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-c9mm4" podUID="f6b9223b-4497-46b0-a09b-058c305a2544"
	Sep 17 09:12:48 functional-554247 kubelet[5232]: E0917 09:12:48.150081    5232 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="5f1c337d-5fd3-4ab4-ae51-15f07b6c4699"
	Sep 17 09:12:49 functional-554247 kubelet[5232]: E0917 09:12:49.383249    5232 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564369383058062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 09:12:49 functional-554247 kubelet[5232]: E0917 09:12:49.383283    5232 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726564369383058062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225841,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [7523174d7cb64faed517830a19141027dca85b0adf57f77e93f038cef0ed43f3] <==
	2024/09/17 09:02:49 Using namespace: kubernetes-dashboard
	2024/09/17 09:02:49 Using in-cluster config to connect to apiserver
	2024/09/17 09:02:49 Using secret token for csrf signing
	2024/09/17 09:02:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/17 09:02:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/17 09:02:49 Successful initial request to the apiserver, version: v1.31.1
	2024/09/17 09:02:49 Generating JWE encryption key
	2024/09/17 09:02:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/17 09:02:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/17 09:02:49 Initializing JWE encryption key from synchronized object
	2024/09/17 09:02:49 Creating in-cluster Sidecar client
	2024/09/17 09:02:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/17 09:02:49 Serving insecurely on HTTP port: 9090
	2024/09/17 09:03:19 Successful request to sidecar
	2024/09/17 09:02:49 Starting overwatch
	
	
	==> storage-provisioner [84ddaccf5c43ecafe414e62ec25a1abc272bbd5333cf93ed6a42a66726fbed8f] <==
	I0917 09:00:19.011450       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 09:00:19.018504       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 09:00:19.018539       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [b50d740033cc1444dbaecbf46b9776563f354d4d392908cf87487c2e6916595e] <==
	I0917 09:00:43.634004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 09:00:43.644788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 09:00:43.645149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 09:01:01.041971       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 09:01:01.042044       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"79a1e9f3-3546-4d19-8e79-96909b1e11b1", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-554247_dc7d129b-5998-42d8-87d7-023655701af8 became leader
	I0917 09:01:01.042127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-554247_dc7d129b-5998-42d8-87d7-023655701af8!
	I0917 09:01:01.142858       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-554247_dc7d129b-5998-42d8-87d7-023655701af8!
	I0917 09:01:13.297294       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0917 09:01:13.297489       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f48cc607-4d95-4067-9573-40aa0e50b85c", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0917 09:01:13.297371       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2a575eab-91da-45e1-96db-cce226437abc 353 0 2024-09-17 08:59:15 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-17 08:59:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f48cc607-4d95-4067-9573-40aa0e50b85c &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f48cc607-4d95-4067-9573-40aa0e50b85c 710 0 2024-09-17 09:01:13 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-17 09:01:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-17 09:01:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0917 09:01:13.297813       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f48cc607-4d95-4067-9573-40aa0e50b85c" provisioned
	I0917 09:01:13.297838       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0917 09:01:13.297845       1 volume_store.go:212] Trying to save persistentvolume "pvc-f48cc607-4d95-4067-9573-40aa0e50b85c"
	I0917 09:01:13.307027       1 volume_store.go:219] persistentvolume "pvc-f48cc607-4d95-4067-9573-40aa0e50b85c" saved
	I0917 09:01:13.307309       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f48cc607-4d95-4067-9573-40aa0e50b85c", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f48cc607-4d95-4067-9573-40aa0e50b85c
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-554247 -n functional-554247
helpers_test.go:261: (dbg) Run:  kubectl --context functional-554247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-c9mm4 nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-554247 describe pod busybox-mount mysql-6cdb49bbb-c9mm4 nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-554247 describe pod busybox-mount mysql-6cdb49bbb-c9mm4 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-554247/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 09:01:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ba6c6a00dbd5b97051096da97016c264a032c823e4ed67030300abcbf89fe676
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 17 Sep 2024 09:02:44 +0000
	      Finished:     Tue, 17 Sep 2024 09:02:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v28lv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-v28lv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11m   default-scheduler  Successfully assigned default/busybox-mount to functional-554247
	  Normal  Pulling    11m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.028s (1m21.282s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-c9mm4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-554247/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 09:02:50 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q679r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q679r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-c9mm4 to functional-554247
	  Normal   Pulling    4m47s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m2s (x4 over 9m)      kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m2s (x4 over 9m)      kubelet            Error: ErrImagePull
	  Normal   BackOff    3m38s (x7 over 8m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m38s (x7 over 8m59s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-554247/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 09:01:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k2pq2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k2pq2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  11m                   default-scheduler  Successfully assigned default/nginx-svc to functional-554247
	  Warning  Failed     11m                   kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m11s (x4 over 11m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m5s (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     6m5s (x3 over 9m31s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m22s (x7 over 11m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    97s (x18 over 11m)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-554247/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 09:01:13 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqxxv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bqxxv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/sp-pod to functional-554247
	  Normal   Pulling    5m43s (x4 over 11m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4m33s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m33s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     4m6s (x7 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    87s (x16 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.77s)
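Every pull in this failure hit the Docker Hub anonymous rate limit (toomanyrequests), so the mysql, nginx, and sp-pod containers never started. A minimal mitigation sketch, assuming a Docker Hub account is available; the username, token, and the secret name "regcred" below are placeholders, not values from this run:

	# authenticate the host daemon so test images pull against the higher authenticated limit
	docker login -u <dockerhub-user>
	# give the cluster the same credentials for in-pod pulls (mysql:5.7, nginx:alpine, nginx)
	kubectl --context functional-554247 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> --docker-password=<access-token>
	# pods then opt in via spec.imagePullSecrets: [{name: regcred}]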

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-554247 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5f1c337d-5fd3-4ab4-ae51-15f07b6c4699] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-554247 -n functional-554247
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2024-09-17 09:05:08.669295454 +0000 UTC m=+1625.951441335
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-554247 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-554247 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-554247/192.168.49.2
Start Time:       Tue, 17 Sep 2024 09:01:08 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:  10.244.0.5
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k2pq2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-k2pq2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-554247
Warning  Failed     3m27s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    95s (x2 over 3m26s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     95s (x2 over 3m26s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    84s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     8s (x3 over 3m27s)   kubelet            Error: ErrImagePull
Warning  Failed     8s (x2 over 107s)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-554247 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-554247 logs nginx-svc -n default: exit status 1 (61.567353ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-554247 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)
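The pod stalls for the same rate-limit reason rather than anything tunnel-specific. A quick triage sketch; the describe command mirrors what the harness runs above, and the events query is an assumed-but-standard kubectl invocation:

	kubectl --context functional-554247 describe po nginx-svc -n default
	kubectl --context functional-554247 get events -n default \
	  --field-selector involvedObject.name=nginx-svc --sort-by=.lastTimestamp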

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (508.48727ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:344: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.51s)
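Before rerunning, the remaining anonymous quota can be checked against Docker Hub's documented rate-limit headers. A sketch using Docker's public test repository; jq is an assumed dependency on the runner:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit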

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image load --daemon kicbase/echo-server:functional-554247 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-554247" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.49s)
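This is a cascade from the Setup failure above: the kicbase/echo-server:functional-554247 tag was never created in the local daemon, so image load --daemon had nothing to push. A verification sketch of the intended sequence:

	# confirm the tag exists locally, load it, then check it landed in the cluster
	docker image ls kicbase/echo-server:functional-554247
	out/minikube-linux-amd64 -p functional-554247 image load --daemon kicbase/echo-server:functional-554247
	out/minikube-linux-amd64 -p functional-554247 image ls | grep echo-server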

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image load --daemon kicbase/echo-server:functional-554247 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-554247" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (506.136079ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:237: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.51s)
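Like Setup, this fails at the docker pull itself. One CI-side mitigation that avoids per-test credentials is pointing the host daemon at a Docker Hub mirror; a sketch, assuming the default daemon.json path and that mirror.gcr.io (a public read-through cache) carries the needed images:

	sudo tee /etc/docker/daemon.json <<'EOF'
	{ "registry-mirrors": ["https://mirror.gcr.io"] }
	EOF
	sudo systemctl restart docker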

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image save kicbase/echo-server:functional-554247 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:411: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0917 09:02:53.890506  440516 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:02:53.890637  440516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:02:53.890647  440516 out.go:358] Setting ErrFile to fd 2...
	I0917 09:02:53.890651  440516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:02:53.890826  440516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:02:53.891425  440516 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:02:53.891524  440516 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:02:53.892016  440516 cli_runner.go:164] Run: docker container inspect functional-554247 --format={{.State.Status}}
	I0917 09:02:53.909373  440516 ssh_runner.go:195] Run: systemctl --version
	I0917 09:02:53.909415  440516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554247
	I0917 09:02:53.926108  440516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/functional-554247/id_rsa Username:docker}
	I0917 09:02:54.016492  440516 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W0917 09:02:54.016578  440516 cache_images.go:253] Failed to load cached images for "functional-554247": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I0917 09:02:54.016611  440516 cache_images.go:265] failed pushing to: functional-554247

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
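The stderr pinpoints the root cause: the tar from ImageSaveToFile was never written, so this failure is a cascade rather than a load bug. The round trip the two tests exercise, with the exact path from the log:

	out/minikube-linux-amd64 -p functional-554247 image save kicbase/echo-server:functional-554247 \
	  /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	test -s /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar && \
	  out/minikube-linux-amd64 -p functional-554247 image load \
	  /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar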

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-554247
functional_test.go:419: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-554247: exit status 1 (17.071686ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-554247

                                                
                                                
** /stderr **
functional_test.go:421: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-554247

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
E0917 09:06:32.926650  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-554247 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.106.232.82   10.106.232.82   80:32557/TCP   5m49s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (109.13s)
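The service output above shows an external IP assigned (10.106.232.82), yet the test built an empty URL and the backend pod never left ImagePullBackOff. A manual check sketch; it assumes minikube tunnel is kept running in another shell, and it only returns the welcome page once the nginx image actually pulls:

	kubectl --context functional-554247 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.106.232.82/ | grep -i 'Welcome to nginx'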

                                                
                                    

Test pass (286/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 4.62
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 3.94
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.07
21 TestBinaryMirror 0.75
22 TestOffline 60.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 198.4
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.67
37 TestAddons/parallel/HelmTiller 13.46
40 TestAddons/parallel/Headlamp 16.41
41 TestAddons/parallel/CloudSpanner 5.48
43 TestAddons/parallel/NvidiaDevicePlugin 5.61
44 TestAddons/parallel/Yakd 10.74
45 TestAddons/StoppedEnableDisable 12.06
46 TestCertOptions 27.49
47 TestCertExpiration 221.43
49 TestForceSystemdFlag 24.56
50 TestForceSystemdEnv 23.91
52 TestKVMDriverInstallOrUpdate 3.41
56 TestErrorSpam/setup 23.16
57 TestErrorSpam/start 0.57
58 TestErrorSpam/status 0.86
59 TestErrorSpam/pause 1.51
60 TestErrorSpam/unpause 1.74
61 TestErrorSpam/stop 1.35
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 69.81
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 23.92
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
73 TestFunctional/serial/CacheCmd/cache/add_local 1.39
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 31.76
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.33
84 TestFunctional/serial/LogsFileCmd 1.36
85 TestFunctional/serial/InvalidService 4.06
87 TestFunctional/parallel/ConfigCmd 0.37
88 TestFunctional/parallel/DashboardCmd 59.68
89 TestFunctional/parallel/DryRun 0.32
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.85
95 TestFunctional/parallel/ServiceCmdConnect 39.51
96 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/SSHCmd 0.55
100 TestFunctional/parallel/CpCmd 1.91
102 TestFunctional/parallel/FileSync 0.25
103 TestFunctional/parallel/CertSync 1.47
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
111 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/ServiceCmd/List 0.47
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
121 TestFunctional/parallel/ServiceCmd/Format 0.32
122 TestFunctional/parallel/ServiceCmd/URL 0.32
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
124 TestFunctional/parallel/ProfileCmd/profile_list 0.33
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
126 TestFunctional/parallel/MountCmd/any-port 85.47
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
130 TestFunctional/parallel/MountCmd/specific-port 1.7
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
132 TestFunctional/parallel/Version/short 0.05
133 TestFunctional/parallel/Version/components 0.46
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
138 TestFunctional/parallel/ImageCommands/ImageBuild 1.95
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 150.38
159 TestMultiControlPlane/serial/DeployApp 5.37
160 TestMultiControlPlane/serial/PingHostFromPods 1.02
161 TestMultiControlPlane/serial/AddWorkerNode 29.19
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
164 TestMultiControlPlane/serial/CopyFile 15.75
165 TestMultiControlPlane/serial/StopSecondaryNode 12.44
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.48
167 TestMultiControlPlane/serial/RestartSecondaryNode 20.89
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 15.95
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 171.11
170 TestMultiControlPlane/serial/DeleteSecondaryNode 11.27
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.47
172 TestMultiControlPlane/serial/StopCluster 35.51
173 TestMultiControlPlane/serial/RestartCluster 119.71
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.47
175 TestMultiControlPlane/serial/AddSecondaryNode 64.19
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.63
180 TestJSONOutput/start/Command 71.79
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.65
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.58
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.75
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
205 TestKicCustomNetwork/create_custom_network 29.03
206 TestKicCustomNetwork/use_default_bridge_network 24.43
207 TestKicExistingNetwork 22.55
208 TestKicCustomSubnet 23.67
209 TestKicStaticIP 26.71
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 47.75
214 TestMountStart/serial/StartWithMountFirst 5.66
215 TestMountStart/serial/VerifyMountFirst 0.24
216 TestMountStart/serial/StartWithMountSecond 8.17
217 TestMountStart/serial/VerifyMountSecond 0.24
218 TestMountStart/serial/DeleteFirst 1.6
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.17
221 TestMountStart/serial/RestartStopped 7.2
222 TestMountStart/serial/VerifyMountPostStop 0.24
225 TestMultiNode/serial/FreshStart2Nodes 67.1
226 TestMultiNode/serial/DeployApp2Nodes 3.4
227 TestMultiNode/serial/PingHostFrom2Pods 0.71
228 TestMultiNode/serial/AddNode 27.66
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.29
231 TestMultiNode/serial/CopyFile 8.98
232 TestMultiNode/serial/StopNode 2.09
233 TestMultiNode/serial/StartAfterStop 8.95
234 TestMultiNode/serial/RestartKeepsNodes 100.89
235 TestMultiNode/serial/DeleteNode 5.22
236 TestMultiNode/serial/StopMultiNode 23.67
237 TestMultiNode/serial/RestartMultiNode 52.77
238 TestMultiNode/serial/ValidateNameConflict 23.56
243 TestPreload 102.64
245 TestScheduledStopUnix 95.8
248 TestInsufficientStorage 9.55
249 TestRunningBinaryUpgrade 58.73
251 TestKubernetesUpgrade 328.79
252 TestMissingContainerUpgrade 136.74
253 TestStoppedBinaryUpgrade/Setup 0.44
254 TestStoppedBinaryUpgrade/Upgrade 95.03
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
264 TestPause/serial/Start 47.41
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
267 TestNoKubernetes/serial/StartWithK8s 24.83
275 TestNetworkPlugins/group/false 3.42
279 TestNoKubernetes/serial/StartWithStopK8s 5.68
280 TestPause/serial/SecondStartNoReconfiguration 27.97
281 TestNoKubernetes/serial/Start 7.94
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
283 TestNoKubernetes/serial/ProfileList 4.63
284 TestNoKubernetes/serial/Stop 1.2
285 TestNoKubernetes/serial/StartNoArgs 6.66
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
287 TestPause/serial/Pause 0.95
288 TestPause/serial/VerifyStatus 0.39
289 TestPause/serial/Unpause 1.05
290 TestPause/serial/PauseAgain 0.78
291 TestPause/serial/DeletePaused 2.87
292 TestPause/serial/VerifyDeletedResources 13.82
294 TestStartStop/group/old-k8s-version/serial/FirstStart 124.27
296 TestStartStop/group/no-preload/serial/FirstStart 53.84
297 TestStartStop/group/no-preload/serial/DeployApp 8.22
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
299 TestStartStop/group/no-preload/serial/Stop 11.82
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
301 TestStartStop/group/no-preload/serial/SecondStart 261.8
302 TestStartStop/group/old-k8s-version/serial/DeployApp 8.43
304 TestStartStop/group/embed-certs/serial/FirstStart 38.09
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.9
306 TestStartStop/group/old-k8s-version/serial/Stop 12.06
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/old-k8s-version/serial/SecondStart 127.41
309 TestStartStop/group/embed-certs/serial/DeployApp 8.24
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.82
311 TestStartStop/group/embed-certs/serial/Stop 11.99
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
313 TestStartStop/group/embed-certs/serial/SecondStart 262.97
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.95
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.23
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.85
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
320 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.86
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
324 TestStartStop/group/old-k8s-version/serial/Pause 2.53
326 TestStartStop/group/newest-cni/serial/FirstStart 26.68
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.77
329 TestStartStop/group/newest-cni/serial/Stop 1.18
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
331 TestStartStop/group/newest-cni/serial/SecondStart 12.42
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
335 TestStartStop/group/newest-cni/serial/Pause 2.66
336 TestNetworkPlugins/group/auto/Start 38.3
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
340 TestStartStop/group/no-preload/serial/Pause 2.8
341 TestNetworkPlugins/group/auto/KubeletFlags 0.27
342 TestNetworkPlugins/group/auto/NetCatPod 10.21
343 TestNetworkPlugins/group/kindnet/Start 70.84
344 TestNetworkPlugins/group/auto/DNS 0.14
345 TestNetworkPlugins/group/auto/Localhost 0.13
346 TestNetworkPlugins/group/auto/HairPin 0.12
347 TestNetworkPlugins/group/calico/Start 50.39
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
350 TestNetworkPlugins/group/calico/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
352 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
353 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.18
354 TestNetworkPlugins/group/calico/KubeletFlags 0.27
355 TestNetworkPlugins/group/calico/NetCatPod 9.18
356 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
357 TestStartStop/group/embed-certs/serial/Pause 2.67
358 TestNetworkPlugins/group/kindnet/DNS 0.15
359 TestNetworkPlugins/group/kindnet/Localhost 0.12
360 TestNetworkPlugins/group/kindnet/HairPin 0.11
361 TestNetworkPlugins/group/calico/DNS 0.13
362 TestNetworkPlugins/group/calico/Localhost 0.12
363 TestNetworkPlugins/group/calico/HairPin 0.11
364 TestNetworkPlugins/group/custom-flannel/Start 50.59
365 TestNetworkPlugins/group/flannel/Start 52.9
366 TestNetworkPlugins/group/bridge/Start 71.37
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.18
369 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
371 TestNetworkPlugins/group/custom-flannel/DNS 0.12
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
374 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
375 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.58
376 TestNetworkPlugins/group/enable-default-cni/Start 64.69
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
379 TestNetworkPlugins/group/flannel/NetCatPod 10.18
380 TestNetworkPlugins/group/flannel/DNS 0.16
381 TestNetworkPlugins/group/flannel/Localhost 0.12
382 TestNetworkPlugins/group/flannel/HairPin 0.14
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
384 TestNetworkPlugins/group/bridge/NetCatPod 9.21
385 TestNetworkPlugins/group/bridge/DNS 0.14
386 TestNetworkPlugins/group/bridge/Localhost 0.12
387 TestNetworkPlugins/group/bridge/HairPin 0.12
388 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
389 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (4.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-963544 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-963544 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.624276457s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.62s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-963544
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-963544: exit status 85 (61.373906ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-963544 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |          |
	|         | -p download-only-963544        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 08:38:02.797768  396137 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:02.797916  396137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:02.797929  396137 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:02.797936  396137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:02.798125  396137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	W0917 08:38:02.798258  396137 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19648-389277/.minikube/config/config.json: open /home/jenkins/minikube-integration/19648-389277/.minikube/config/config.json: no such file or directory
	I0917 08:38:02.798834  396137 out.go:352] Setting JSON to true
	I0917 08:38:02.799839  396137 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8432,"bootTime":1726553851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:02.799974  396137 start.go:139] virtualization: kvm guest
	I0917 08:38:02.802439  396137 out.go:97] [download-only-963544] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0917 08:38:02.802561  396137 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19648-389277/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 08:38:02.802607  396137 notify.go:220] Checking for updates...
	I0917 08:38:02.803903  396137 out.go:169] MINIKUBE_LOCATION=19648
	I0917 08:38:02.805163  396137 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:02.806905  396137 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:02.808205  396137 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 08:38:02.809476  396137 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0917 08:38:02.811840  396137 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 08:38:02.812094  396137 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:02.834288  396137 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:02.834411  396137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:02.881972  396137 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 08:38:02.872545331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:02.882078  396137 docker.go:318] overlay module found
	I0917 08:38:02.883997  396137 out.go:97] Using the docker driver based on user configuration
	I0917 08:38:02.884024  396137 start.go:297] selected driver: docker
	I0917 08:38:02.884031  396137 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:02.884112  396137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:02.929289  396137 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 08:38:02.920573835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:02.929476  396137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:02.930011  396137 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0917 08:38:02.930195  396137 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 08:38:02.932141  396137 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-963544 host does not exist
	  To start a cluster, run: "minikube start -p download-only-963544"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
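
Note: the exit status 85 from "minikube logs" above is expected here rather than a defect; as the captured output itself states, a --download-only start never creates the control-plane host, so there are no cluster logs to collect beyond the audit table. A minimal sketch to reproduce locally (assumes the same out/minikube-linux-amd64 binary; the profile name and flags are taken from this run):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-963544 --force --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker
	out/minikube-linux-amd64 logs -p download-only-963544   # prints the audit table, then exits 85 because no host exists
	echo $?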

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-963544
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (3.94s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-223077 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-223077 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.935549441s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.94s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-223077
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-223077: exit status 85 (61.364679ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-963544 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-963544        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| delete  | -p download-only-963544        | download-only-963544 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC | 17 Sep 24 08:38 UTC |
	| start   | -o=json --download-only        | download-only-223077 | jenkins | v1.34.0 | 17 Sep 24 08:38 UTC |                     |
	|         | -p download-only-223077        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 08:38:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 08:38:07.811091  396484 out.go:345] Setting OutFile to fd 1 ...
	I0917 08:38:07.811352  396484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:07.811362  396484 out.go:358] Setting ErrFile to fd 2...
	I0917 08:38:07.811368  396484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 08:38:07.811595  396484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 08:38:07.812224  396484 out.go:352] Setting JSON to true
	I0917 08:38:07.813146  396484 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8437,"bootTime":1726553851,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 08:38:07.813251  396484 start.go:139] virtualization: kvm guest
	I0917 08:38:07.815422  396484 out.go:97] [download-only-223077] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 08:38:07.815595  396484 notify.go:220] Checking for updates...
	I0917 08:38:07.816994  396484 out.go:169] MINIKUBE_LOCATION=19648
	I0917 08:38:07.818305  396484 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 08:38:07.819584  396484 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 08:38:07.820860  396484 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 08:38:07.822027  396484 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0917 08:38:07.824396  396484 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 08:38:07.824615  396484 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 08:38:07.846325  396484 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 08:38:07.846389  396484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:07.895584  396484 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-17 08:38:07.886068172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:07.895740  396484 docker.go:318] overlay module found
	I0917 08:38:07.897604  396484 out.go:97] Using the docker driver based on user configuration
	I0917 08:38:07.897632  396484 start.go:297] selected driver: docker
	I0917 08:38:07.897637  396484 start.go:901] validating driver "docker" against <nil>
	I0917 08:38:07.897722  396484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 08:38:07.948121  396484 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-17 08:38:07.939358286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 08:38:07.948292  396484 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 08:38:07.948828  396484 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0917 08:38:07.948969  396484 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 08:38:07.950769  396484 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-223077 host does not exist
	  To start a cluster, run: "minikube start -p download-only-223077"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-223077
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.07s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-146413 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-146413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-146413
--- PASS: TestDownloadOnlyKic (1.07s)

TestBinaryMirror (0.75s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-713061 --alsologtostderr --binary-mirror http://127.0.0.1:45413 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-713061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-713061
--- PASS: TestBinaryMirror (0.75s)

TestOffline (60.57s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-769313 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-769313 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (58.274814034s)
helpers_test.go:175: Cleaning up "offline-crio-769313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-769313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-769313: (2.293758472s)
--- PASS: TestOffline (60.57s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-093168
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-093168: exit status 85 (51.161309ms)

-- stdout --
	* Profile "addons-093168" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-093168"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-093168
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-093168: exit status 85 (49.856685ms)

-- stdout --
	* Profile "addons-093168" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-093168"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (198.4s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-093168 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-093168 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m18.395646203s)
--- PASS: TestAddons/Setup (198.40s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-093168 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-093168 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hrsgq" [b5a56cee-091b-476c-aa84-e450bc8f4bb3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004278222s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-093168
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-093168: (5.664739087s)
--- PASS: TestAddons/parallel/InspektorGadget (10.67s)

TestAddons/parallel/HelmTiller (13.46s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.174675ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-p6zds" [48ba15f8-54f5-410f-8c46-b15665532417] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.04528081s
addons_test.go:475: (dbg) Run:  kubectl --context addons-093168 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-093168 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.853487935s)
addons_test.go:480: kubectl --context addons-093168 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:475: (dbg) Run:  kubectl --context addons-093168 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-093168 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.569486986s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.46s)
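
Note: the "Unable to use a TTY - input is not a terminal" stderr above comes from passing -it to kubectl run in a non-interactive CI shell; kubectl cannot allocate a terminal, falls back to streaming logs, and the harness simply retries until the helm version check succeeds. A hedged sketch of the same probe without requesting a TTY (keeping -i so --rm can still attach; flags otherwise taken from this run):

	kubectl --context addons-093168 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -i --namespace=kube-system -- version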

TestAddons/parallel/Headlamp (16.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-093168 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-52qt8" [d9554114-ba83-441b-9ddd-32b26e575f18] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-52qt8" [d9554114-ba83-441b-9ddd-32b26e575f18] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.015244602s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-093168 addons disable headlamp --alsologtostderr -v=1: (5.655088114s)
--- PASS: TestAddons/parallel/Headlamp (16.41s)

TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-qhw6c" [971c3980-cc6c-4a71-bf15-30c6e50fd373] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003765236s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-093168
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

TestAddons/parallel/NvidiaDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fxm5v" [d00acbad-2301-4783-835a-f6133e77a22b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004205764s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-093168
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/parallel/Yakd (10.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-9ztvn" [8068c634-ffac-4bb4-a4bf-24e6fc19dc14] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004276615s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-093168 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-093168 addons disable yakd --alsologtostderr -v=1: (5.739151956s)
--- PASS: TestAddons/parallel/Yakd (10.74s)

TestAddons/StoppedEnableDisable (12.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-093168
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-093168: (11.821082348s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-093168
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-093168
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-093168
--- PASS: TestAddons/StoppedEnableDisable (12.06s)

TestCertOptions (27.49s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-060191 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-060191 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.084138366s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-060191 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-060191 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-060191 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-060191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-060191
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-060191: (3.813210479s)
--- PASS: TestCertOptions (27.49s)
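
Note: the openssl invocation above dumps the full API server certificate; the values this test configures (--apiserver-ips, --apiserver-names, --apiserver-port=8555) can be spot-checked by narrowing the same command. A sketch assuming GNU grep on the host; the certificate path is the one the test itself reads:

	out/minikube-linux-amd64 -p cert-options-060191 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'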

TestCertExpiration (221.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-532586 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-532586 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.945328187s)
E0917 09:41:07.543034  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-532586 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-532586 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.214771427s)
helpers_test.go:175: Cleaning up "cert-expiration-532586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-532586
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-532586: (2.266750913s)
--- PASS: TestCertExpiration (221.43s)

TestForceSystemdFlag (24.56s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-144682 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-144682 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.748842331s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-144682 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-144682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-144682
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-144682: (2.531166034s)
--- PASS: TestForceSystemdFlag (24.56s)

TestForceSystemdEnv (23.91s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-164849 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-164849 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.435394422s)
helpers_test.go:175: Cleaning up "force-systemd-env-164849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-164849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-164849: (2.475651769s)
--- PASS: TestForceSystemdEnv (23.91s)

TestKVMDriverInstallOrUpdate (3.41s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.41s)

TestErrorSpam/setup (23.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-708622 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-708622 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-708622 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-708622 --driver=docker  --container-runtime=crio: (23.156214564s)
--- PASS: TestErrorSpam/setup (23.16s)

TestErrorSpam/start (0.57s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

TestErrorSpam/stop (1.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 stop: (1.175093762s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-708622 --log_dir /tmp/nospam-708622 stop
--- PASS: TestErrorSpam/stop (1.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19648-389277/.minikube/files/etc/test/nested/copy/396125/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554247 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-554247 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.804578386s)
--- PASS: TestFunctional/serial/StartWithProxy (69.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (23.92s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554247 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-554247 --alsologtostderr -v=8: (23.916176647s)
functional_test.go:663: soft start took 23.916960319s for "functional-554247" cluster.
--- PASS: TestFunctional/serial/SoftStart (23.92s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-554247 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-554247 cache add registry.k8s.io/pause:3.3: (1.128281534s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-554247 /tmp/TestFunctionalserialCacheCmdcacheadd_local2243664336/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cache add minikube-local-cache-test:functional-554247
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-554247 cache add minikube-local-cache-test:functional-554247: (1.061962103s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cache delete minikube-local-cache-test:functional-554247
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-554247
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (266.079182ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
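
The non-zero exit in the middle of this test is expected: `crictl inspecti` fails once the image has been removed from the node, and `cache reload` restores it. Condensed, under the same profile:

	# delete the cached image inside the node
	out/minikube-linux-amd64 -p functional-554247 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# confirm it is gone (exits 1 with "no such image")
	out/minikube-linux-amd64 -p functional-554247 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# re-push cached images into the node and confirm the image is back
	out/minikube-linux-amd64 -p functional-554247 cache reload
	out/minikube-linux-amd64 -p functional-554247 ssh sudo crictl inspecti registry.k8s.io/pause:latest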

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 kubectl -- --context functional-554247 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-554247 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.76s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554247 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-554247 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.754916566s)
functional_test.go:761: restart took 31.75506994s for "functional-554247" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.76s)
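
The restart above feeds a component flag through to the apiserver via --extra-config; a sketch of the single invocation, exactly as run here:

	out/minikube-linux-amd64 start -p functional-554247 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all   # block until every verified component reports ready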

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-554247 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-554247 logs: (1.33444696s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 logs --file /tmp/TestFunctionalserialLogsFileCmd267755179/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-554247 logs --file /tmp/TestFunctionalserialLogsFileCmd267755179/001/logs.txt: (1.35563301s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (4.06s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-554247 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-554247
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-554247: exit status 115 (321.775601ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31435 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-554247 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)
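
Exit status 115 is the point of this test: `minikube service` must refuse to hand out a URL when no running pod backs the service. A sketch, assuming testdata/invalidsvc.yaml defines such a service:

	kubectl --context functional-554247 apply -f testdata/invalidsvc.yaml
	# expected to fail with SVC_UNREACHABLE (exit 115): no running pod for invalid-svc
	out/minikube-linux-amd64 service invalid-svc -p functional-554247
	kubectl --context functional-554247 delete -f testdata/invalidsvc.yaml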

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 config get cpus: exit status 14 (65.912258ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 config get cpus: exit status 14 (69.330345ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
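
Exit status 14 is the expected result of `config get` on an unset key; the set/get/unset cycle being checked is:

	out/minikube-linux-amd64 -p functional-554247 config set cpus 2
	out/minikube-linux-amd64 -p functional-554247 config get cpus    # prints 2
	out/minikube-linux-amd64 -p functional-554247 config unset cpus
	out/minikube-linux-amd64 -p functional-554247 config get cpus    # exits 14: key not found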

                                                
                                    
TestFunctional/parallel/DashboardCmd (59.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-554247 --alsologtostderr -v=1]
E0917 09:01:53.422469  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:02:13.903894  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-554247 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 437650: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (59.68s)

                                                
                                    
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-554247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (137.94902ms)

                                                
                                                
-- stdout --
	* [functional-554247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 09:01:48.942061  436607 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:01:48.942178  436607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:01:48.942187  436607 out.go:358] Setting ErrFile to fd 2...
	I0917 09:01:48.942191  436607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:01:48.942367  436607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:01:48.942892  436607 out.go:352] Setting JSON to false
	I0917 09:01:48.943997  436607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9858,"bootTime":1726553851,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 09:01:48.944097  436607 start.go:139] virtualization: kvm guest
	I0917 09:01:48.946399  436607 out.go:177] * [functional-554247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 09:01:48.947751  436607 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 09:01:48.947759  436607 notify.go:220] Checking for updates...
	I0917 09:01:48.950362  436607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:01:48.951610  436607 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 09:01:48.952833  436607 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 09:01:48.954026  436607 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 09:01:48.955169  436607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 09:01:48.956879  436607 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:01:48.957304  436607 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:01:48.979896  436607 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 09:01:48.979996  436607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 09:01:49.026602  436607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 09:01:49.017018609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 09:01:49.026704  436607 docker.go:318] overlay module found
	I0917 09:01:49.028559  436607 out.go:177] * Using the docker driver based on existing profile
	I0917 09:01:49.029829  436607 start.go:297] selected driver: docker
	I0917 09:01:49.029841  436607 start.go:901] validating driver "docker" against &{Name:functional-554247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-554247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:01:49.029931  436607 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 09:01:49.031881  436607 out.go:201] 
	W0917 09:01:49.032898  436607 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 09:01:49.034065  436607 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554247 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
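
Exit status 23 here exercises the memory-validation path: even a dry run must reject an allocation below the 1800MB floor. Both invocations from this test, condensed:

	# fails validation: 250MiB is below the usable minimum of 1800MB (exit 23)
	out/minikube-linux-amd64 start -p functional-554247 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
	# the same dry run without the undersized memory request passes
	out/minikube-linux-amd64 start -p functional-554247 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio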

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-554247 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (140.218465ms)

                                                
                                                
-- stdout --
	* [functional-554247] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 09:01:49.265552  436803 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:01:49.265652  436803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:01:49.265660  436803 out.go:358] Setting ErrFile to fd 2...
	I0917 09:01:49.265664  436803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:01:49.265911  436803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:01:49.266421  436803 out.go:352] Setting JSON to false
	I0917 09:01:49.267397  436803 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9858,"bootTime":1726553851,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 09:01:49.267518  436803 start.go:139] virtualization: kvm guest
	I0917 09:01:49.269267  436803 out.go:177] * [functional-554247] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0917 09:01:49.270333  436803 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 09:01:49.270370  436803 notify.go:220] Checking for updates...
	I0917 09:01:49.272498  436803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:01:49.274087  436803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 09:01:49.275190  436803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 09:01:49.276335  436803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 09:01:49.277530  436803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 09:01:49.279224  436803 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:01:49.279887  436803 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:01:49.302094  436803 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 09:01:49.302170  436803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 09:01:49.349616  436803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 09:01:49.340491586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 09:01:49.349719  436803 docker.go:318] overlay module found
	I0917 09:01:49.351565  436803 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0917 09:01:49.353053  436803 start.go:297] selected driver: docker
	I0917 09:01:49.353072  436803 start.go:901] validating driver "docker" against &{Name:functional-554247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-554247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 09:01:49.353162  436803 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 09:01:49.355068  436803 out.go:201] 
	W0917 09:01:49.356388  436803 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 09:01:49.357672  436803 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (39.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-554247 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-554247 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-88g45" [65d56eab-d631-4d2b-9a97-e73fa2d0df6b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-88g45" [65d56eab-d631-4d2b-9a97-e73fa2d0df6b] Running
E0917 09:01:43.180311  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 39.002809004s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31459
functional_test.go:1675: http://192.168.49.2:31459: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-88g45

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31459
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (39.51s)
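
End to end, this test deploys an echo server, exposes it as a NodePort, and fetches the URL minikube reports. A sketch of the same flow; the trailing curl is an illustrative addition, not part of the test:

	kubectl --context functional-554247 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-554247 expose deployment hello-node-connect --type=NodePort --port=8080
	# once the pod is Running, resolve the NodePort URL and hit it
	URL=$(out/minikube-linux-amd64 -p functional-554247 service hello-node-connect --url)
	curl -s "$URL"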

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh -n functional-554247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cp functional-554247:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4260227046/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh -n functional-554247 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh -n functional-554247 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/396125/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo cat /etc/test/nested/copy/396125/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/396125.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo cat /etc/ssl/certs/396125.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/396125.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo cat /usr/share/ca-certificates/396125.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3961252.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo cat /etc/ssl/certs/3961252.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3961252.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo cat /usr/share/ca-certificates/3961252.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.47s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-554247 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 ssh "sudo systemctl is-active docker": exit status 1 (278.752652ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 ssh "sudo systemctl is-active containerd": exit status 1 (268.336974ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
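
Both non-zero exits above are expected: `systemctl is-active` returns exit status 3 for an inactive unit, and ssh propagates it. On a crio cluster the other two runtimes should both print "inactive":

	out/minikube-linux-amd64 -p functional-554247 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-554247 ssh "sudo systemctl is-active containerd"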

                                                
                                    
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-554247 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-554247 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-ppbl5" [fac1bd88-3bca-46b7-af5b-0896a27c9890] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-ppbl5" [fac1bd88-3bca-46b7-af5b-0896a27c9890] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003803871s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-554247 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-554247 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-554247 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 433317: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-554247 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-554247 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 service list -o json
functional_test.go:1494: Took "468.613345ms" to run "out/minikube-linux-amd64 -p functional-554247 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32432
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32432
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "284.636596ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "45.61223ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "278.814561ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.101958ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (85.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdany-port4085194652/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726563681288688853" to /tmp/TestFunctionalparallelMountCmdany-port4085194652/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726563681288688853" to /tmp/TestFunctionalparallelMountCmdany-port4085194652/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726563681288688853" to /tmp/TestFunctionalparallelMountCmdany-port4085194652/001/test-1726563681288688853
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.303216ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 09:01 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 09:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 09:01 test-1726563681288688853
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh cat /mount-9p/test-1726563681288688853
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-554247 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cd7aec83-19a1-41ed-b1a4-fc4342a3b969] Pending
helpers_test.go:344: "busybox-mount" [cd7aec83-19a1-41ed-b1a4-fc4342a3b969] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0917 09:01:32.926718  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:32.933544  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:32.944886  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:32.966315  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:33.007720  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:33.089119  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:33.250685  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:33.572638  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:34.214930  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:35.496730  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:01:38.058103  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [cd7aec83-19a1-41ed-b1a4-fc4342a3b969] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cd7aec83-19a1-41ed-b1a4-fc4342a3b969] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m23.004467405s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-554247 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdany-port4085194652/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (85.47s)
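The any-port flow above is reproducible by hand. A minimal sketch, assuming a running functional-554247 profile, a hypothetical host directory /tmp/mount-src, and plain `minikube` standing in for the out/minikube-linux-amd64 binary under test:

  # start the 9p mount daemon in the background on an auto-selected port
  minikube mount -p functional-554247 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  # the first probe may exit non-zero before the mount settles, which is why the test retries
  minikube -p functional-554247 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-554247 ssh -- ls -la /mount-9p

The non-zero findmnt exit logged above is that same race: the check ran once before the daemon had finished mounting and succeeded on the retry.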

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)
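All three UpdateContextCmd subtests run the same command against different kubeconfig states; update-context rewrites the profile's kubeconfig entry so it points at the cluster's current IP and port. A minimal sketch, with `minikube` again standing in for the binary under test:

  minikube -p functional-554247 update-context --alsologtostderr -v=2
  # hypothetical follow-up check, not part of the test itself
  kubectl config current-context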

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdspecific-port3124898211/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.642765ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdspecific-port3124898211/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 ssh "sudo umount -f /mount-9p": exit status 1 (300.260951ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-554247 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdspecific-port3124898211/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)
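Here the 9p server is pinned with --port 46464, and the test verifies that stopping the mount daemon also removes the guest mount: the forced umount afterwards is expected to fail with exit status 32 ("not mounted"), exactly as logged above. A minimal sketch, with the same hypothetical /tmp/mount-src directory:

  minikube mount -p functional-554247 /tmp/mount-src:/mount-9p --port 46464 --alsologtostderr -v=1 &
  minikube -p functional-554247 ssh "findmnt -T /mount-9p | grep 9p"
  # after the daemon is stopped, this should report "not mounted" (exit 32)
  minikube -p functional-554247 ssh "sudo umount -f /mount-9p"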

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1765592865/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1765592865/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1765592865/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T" /mount1: exit status 1 (383.74667ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-554247 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1765592865/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1765592865/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554247 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1765592865/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)
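VerifyCleanup mounts the same host directory at three guest paths and then checks that a single `mount --kill=true` tears all of the daemons down, which is why the stop step logs "unable to find parent, assuming dead" for each one. A minimal sketch:

  for m in /mount1 /mount2 /mount3; do
    minikube mount -p functional-554247 /tmp/mount-src:$m --alsologtostderr -v=1 &
  done
  minikube -p functional-554247 ssh "findmnt -T" /mount1
  # one kill cleans up every mount daemon for the profile
  minikube mount -p functional-554247 --kill=true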

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 version -o=json --components
E0917 09:02:54.865379  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/Version/components (0.46s)
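The two Version subtests exercise both output modes of the same command; a minimal sketch:

  minikube -p functional-554247 version --short                 # bare version string only
  minikube -p functional-554247 version -o=json --components    # per-component versions (driver/runtime dependent)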

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554247 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-554247
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554247 image ls --format short --alsologtostderr:
I0917 09:02:55.019595  440989 out.go:345] Setting OutFile to fd 1 ...
I0917 09:02:55.019733  440989 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:55.019744  440989 out.go:358] Setting ErrFile to fd 2...
I0917 09:02:55.019748  440989 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:55.019915  440989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
I0917 09:02:55.020588  440989 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:55.020683  440989 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:55.021054  440989 cli_runner.go:164] Run: docker container inspect functional-554247 --format={{.State.Status}}
I0917 09:02:55.038347  440989 ssh_runner.go:195] Run: systemctl --version
I0917 09:02:55.038397  440989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554247
I0917 09:02:55.055625  440989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/functional-554247/id_rsa Username:docker}
I0917 09:02:55.148358  440989 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
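As the stderr above shows, `image ls` inspects the container over SSH and shells out to `sudo crictl images --output json`, then renders the result; the next three subtests only vary the renderer. A minimal sketch of the four formats:

  minikube -p functional-554247 image ls --format short   # repo:tag, one per line
  minikube -p functional-554247 image ls --format table   # adds image ID and size columns
  minikube -p functional-554247 image ls --format json
  minikube -p functional-554247 image ls --format yaml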

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554247 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-554247  | 35cb538a6aed6 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554247 image ls --format table --alsologtostderr:
I0917 09:02:55.648957  441144 out.go:345] Setting OutFile to fd 1 ...
I0917 09:02:55.649084  441144 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:55.649093  441144 out.go:358] Setting ErrFile to fd 2...
I0917 09:02:55.649097  441144 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:55.649266  441144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
I0917 09:02:55.649928  441144 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:55.650029  441144 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:55.650395  441144 cli_runner.go:164] Run: docker container inspect functional-554247 --format={{.State.Status}}
I0917 09:02:55.668022  441144 ssh_runner.go:195] Run: systemctl --version
I0917 09:02:55.668076  441144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554247
I0917 09:02:55.684765  441144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/functional-554247/id_rsa Username:docker}
I0917 09:02:55.776437  441144 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554247 image ls --format json --alsologtostderr:
[{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"35cb538a6aed6c104016e0c399c221b142964779017d52171005b338cce56e13","repoDigests":["localhost/minikube-local-cache-test@sha256:769461ba2d4584ee3d7864913c011681e5821ee6b94ddcafa7516926e2121633"],"repoTags":["localhost/minikube-local-cache-test:functional-554247"],"size":"3330"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554247 image ls --format json --alsologtostderr:
I0917 09:02:55.441143  441091 out.go:345] Setting OutFile to fd 1 ...
I0917 09:02:55.441245  441091 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:55.441253  441091 out.go:358] Setting ErrFile to fd 2...
I0917 09:02:55.441258  441091 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:55.441456  441091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
I0917 09:02:55.442064  441091 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:55.442163  441091 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:55.442515  441091 cli_runner.go:164] Run: docker container inspect functional-554247 --format={{.State.Status}}
I0917 09:02:55.459886  441091 ssh_runner.go:195] Run: systemctl --version
I0917 09:02:55.459932  441091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554247
I0917 09:02:55.477021  441091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/functional-554247/id_rsa Username:docker}
I0917 09:02:55.568669  441091 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554247 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 35cb538a6aed6c104016e0c399c221b142964779017d52171005b338cce56e13
repoDigests:
- localhost/minikube-local-cache-test@sha256:769461ba2d4584ee3d7864913c011681e5821ee6b94ddcafa7516926e2121633
repoTags:
- localhost/minikube-local-cache-test:functional-554247
size: "3330"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554247 image ls --format yaml --alsologtostderr:
I0917 09:02:55.231036  441040 out.go:345] Setting OutFile to fd 1 ...
I0917 09:02:55.231158  441040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:55.231167  441040 out.go:358] Setting ErrFile to fd 2...
I0917 09:02:55.231171  441040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:55.231383  441040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
I0917 09:02:55.232059  441040 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:55.232165  441040 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:55.232602  441040 cli_runner.go:164] Run: docker container inspect functional-554247 --format={{.State.Status}}
I0917 09:02:55.249751  441040 ssh_runner.go:195] Run: systemctl --version
I0917 09:02:55.249801  441040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554247
I0917 09:02:55.266570  441040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/functional-554247/id_rsa Username:docker}
I0917 09:02:55.356952  441040 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554247 ssh pgrep buildkitd: exit status 1 (241.395166ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image build -t localhost/my-image:functional-554247 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-554247 image build -t localhost/my-image:functional-554247 testdata/build --alsologtostderr: (1.488217848s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554247 image build -t localhost/my-image:functional-554247 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4f6eabf104b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-554247
--> 788558a5df5
Successfully tagged localhost/my-image:functional-554247
788558a5df534b8430fd69c384a1942be9f3732802f9da773e25a1186016a903
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554247 image build -t localhost/my-image:functional-554247 testdata/build --alsologtostderr:
I0917 09:02:56.103274  441289 out.go:345] Setting OutFile to fd 1 ...
I0917 09:02:56.103516  441289 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:56.103524  441289 out.go:358] Setting ErrFile to fd 2...
I0917 09:02:56.103528  441289 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 09:02:56.103717  441289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
I0917 09:02:56.104393  441289 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:56.104977  441289 config.go:182] Loaded profile config "functional-554247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0917 09:02:56.105361  441289 cli_runner.go:164] Run: docker container inspect functional-554247 --format={{.State.Status}}
I0917 09:02:56.122659  441289 ssh_runner.go:195] Run: systemctl --version
I0917 09:02:56.122709  441289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554247
I0917 09:02:56.141104  441289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/functional-554247/id_rsa Username:docker}
I0917 09:02:56.232361  441289 build_images.go:161] Building image from path: /tmp/build.1264091466.tar
I0917 09:02:56.232423  441289 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 09:02:56.240866  441289 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1264091466.tar
I0917 09:02:56.244051  441289 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1264091466.tar: stat -c "%s %y" /var/lib/minikube/build/build.1264091466.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1264091466.tar': No such file or directory
I0917 09:02:56.244084  441289 ssh_runner.go:362] scp /tmp/build.1264091466.tar --> /var/lib/minikube/build/build.1264091466.tar (3072 bytes)
I0917 09:02:56.265952  441289 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1264091466
I0917 09:02:56.274155  441289 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1264091466 -xf /var/lib/minikube/build/build.1264091466.tar
I0917 09:02:56.282291  441289 crio.go:315] Building image: /var/lib/minikube/build/build.1264091466
I0917 09:02:56.282351  441289 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-554247 /var/lib/minikube/build/build.1264091466 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0917 09:02:57.525849  441289 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-554247 /var/lib/minikube/build/build.1264091466 --cgroup-manager=cgroupfs: (1.243470008s)
I0917 09:02:57.525922  441289 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1264091466
I0917 09:02:57.534636  441289 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1264091466.tar
I0917 09:02:57.542837  441289 build_images.go:217] Built localhost/my-image:functional-554247 from /tmp/build.1264091466.tar
I0917 09:02:57.542872  441289 build_images.go:133] succeeded building to: functional-554247
I0917 09:02:57.542884  441289 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.95s)
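Because pgrep found no buildkitd in the guest (the exit 1 above), the crio path tars the build context, copies it to /var/lib/minikube/build, and runs `sudo podman build` over SSH, as the stderr trace shows. A minimal sketch of the user-facing flow, using a hypothetical context directory equivalent to testdata/build:

  mkdir -p build && cd build
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo test > content.txt
  minikube -p functional-554247 image build -t localhost/my-image:functional-554247 .
  minikube -p functional-554247 image ls | grep my-image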

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image rm kicbase/echo-server:functional-554247 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-554247 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-554247 tunnel --alsologtostderr] ...
E0917 09:07:00.629124  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:11:32.926088  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-554247
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-554247
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-554247
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (150.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-826489 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-826489 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m29.702347783s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (150.38s)
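The --ha flag provisions additional control-plane nodes alongside the primary, which is what makes the later failover subtests possible. A minimal sketch with the same flags the test passes:

  minikube start -p ha-826489 --ha --wait=true --memory=2200 --driver=docker --container-runtime=crio -v=7 --alsologtostderr
  minikube -p ha-826489 status -v=7 --alsologtostderr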

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.37s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-826489 -- rollout status deployment/busybox: (3.524655544s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-7nrm5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-pbngh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-th7jl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-7nrm5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-pbngh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-th7jl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-7nrm5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-pbngh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-th7jl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.37s)
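DeployApp rolls out a busybox deployment and resolves an external name, the API server's short name, and its fully qualified in-cluster name from every replica. A minimal sketch, with <pod> standing in for one of the replica names listed by the jsonpath query:

  kubectl --context ha-826489 get pods -o jsonpath='{.items[*].metadata.name}'
  kubectl --context ha-826489 exec <pod> -- nslookup kubernetes.io
  kubectl --context ha-826489 exec <pod> -- nslookup kubernetes.default
  kubectl --context ha-826489 exec <pod> -- nslookup kubernetes.default.svc.cluster.local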

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.02s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-7nrm5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-7nrm5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-pbngh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-pbngh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-th7jl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-826489 -- exec busybox-7dff88458-th7jl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)
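Each replica resolves host.minikube.internal (the awk/cut pipeline plucks the address out of nslookup's fifth output line) and then pings the resulting docker network gateway, 192.168.49.1 in this run. A minimal sketch, again with <pod> as a placeholder:

  kubectl --context ha-826489 exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl --context ha-826489 exec <pod> -- sh -c "ping -c 1 192.168.49.1"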

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (29.19s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-826489 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-826489 -v=7 --alsologtostderr: (28.352134654s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.19s)
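node add joins a worker (not a control plane) by default; the new node shows up as ha-826489-m04 in the CopyFile test below. A minimal sketch:

  minikube node add -p ha-826489 -v=7 --alsologtostderr
  minikube -p ha-826489 status -v=7 --alsologtostderr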

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-826489 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.75s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp testdata/cp-test.txt ha-826489:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1697603940/001/cp-test_ha-826489.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489:/home/docker/cp-test.txt ha-826489-m02:/home/docker/cp-test_ha-826489_ha-826489-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test_ha-826489_ha-826489-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489:/home/docker/cp-test.txt ha-826489-m03:/home/docker/cp-test_ha-826489_ha-826489-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test_ha-826489_ha-826489-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489:/home/docker/cp-test.txt ha-826489-m04:/home/docker/cp-test_ha-826489_ha-826489-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m04 "sudo cat /home/docker/cp-test_ha-826489_ha-826489-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp testdata/cp-test.txt ha-826489-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1697603940/001/cp-test_ha-826489-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test.txt"
E0917 09:16:07.542554  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:16:07.549005  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:16:07.560377  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:16:07.581777  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:16:07.623163  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m02:/home/docker/cp-test.txt ha-826489:/home/docker/cp-test_ha-826489-m02_ha-826489.txt
E0917 09:16:07.704535  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:16:07.865949  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test.txt"
E0917 09:16:08.187484  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489 "sudo cat /home/docker/cp-test_ha-826489-m02_ha-826489.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m02:/home/docker/cp-test.txt ha-826489-m03:/home/docker/cp-test_ha-826489-m02_ha-826489-m03.txt
E0917 09:16:08.828889  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test_ha-826489-m02_ha-826489-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m02:/home/docker/cp-test.txt ha-826489-m04:/home/docker/cp-test_ha-826489-m02_ha-826489-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test.txt"
E0917 09:16:10.110649  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m04 "sudo cat /home/docker/cp-test_ha-826489-m02_ha-826489-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp testdata/cp-test.txt ha-826489-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1697603940/001/cp-test_ha-826489-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m03:/home/docker/cp-test.txt ha-826489:/home/docker/cp-test_ha-826489-m03_ha-826489.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489 "sudo cat /home/docker/cp-test_ha-826489-m03_ha-826489.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m03:/home/docker/cp-test.txt ha-826489-m02:/home/docker/cp-test_ha-826489-m03_ha-826489-m02.txt
E0917 09:16:12.672680  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test_ha-826489-m03_ha-826489-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m03:/home/docker/cp-test.txt ha-826489-m04:/home/docker/cp-test_ha-826489-m03_ha-826489-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m04 "sudo cat /home/docker/cp-test_ha-826489-m03_ha-826489-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp testdata/cp-test.txt ha-826489-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1697603940/001/cp-test_ha-826489-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m04:/home/docker/cp-test.txt ha-826489:/home/docker/cp-test_ha-826489-m04_ha-826489.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489 "sudo cat /home/docker/cp-test_ha-826489-m04_ha-826489.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m04:/home/docker/cp-test.txt ha-826489-m02:/home/docker/cp-test_ha-826489-m04_ha-826489-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test_ha-826489-m04_ha-826489-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m04:/home/docker/cp-test.txt ha-826489-m03:/home/docker/cp-test_ha-826489-m04_ha-826489-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test_ha-826489-m04_ha-826489-m03.txt"
E0917 09:16:17.794874  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.75s)
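For reference, each CopyFile assertion above is a copy/read round-trip; a minimal sketch using this run's profile and node names (commands reproduced from the log):

	# local file -> node, then read it back over ssh
	out/minikube-linux-amd64 -p ha-826489 cp testdata/cp-test.txt ha-826489-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m02 "sudo cat /home/docker/cp-test.txt"
	# node -> node, then verify on the destination
	out/minikube-linux-amd64 -p ha-826489 cp ha-826489-m02:/home/docker/cp-test.txt ha-826489-m03:/home/docker/cp-test_ha-826489-m02_ha-826489-m03.txt
	out/minikube-linux-amd64 -p ha-826489 ssh -n ha-826489-m03 "sudo cat /home/docker/cp-test_ha-826489-m02_ha-826489-m03.txt"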

TestMultiControlPlane/serial/StopSecondaryNode (12.44s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 node stop m02 -v=7 --alsologtostderr
E0917 09:16:28.036999  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-826489 node stop m02 -v=7 --alsologtostderr: (11.783950118s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr: exit status 7 (651.244324ms)
-- stdout --
	ha-826489
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-826489-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-826489-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-826489-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0917 09:16:29.672320  466656 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:16:29.672424  466656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:16:29.672431  466656 out.go:358] Setting ErrFile to fd 2...
	I0917 09:16:29.672436  466656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:16:29.672645  466656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:16:29.672828  466656 out.go:352] Setting JSON to false
	I0917 09:16:29.672865  466656 mustload.go:65] Loading cluster: ha-826489
	I0917 09:16:29.672921  466656 notify.go:220] Checking for updates...
	I0917 09:16:29.673279  466656 config.go:182] Loaded profile config "ha-826489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:16:29.673294  466656 status.go:255] checking status of ha-826489 ...
	I0917 09:16:29.673711  466656 cli_runner.go:164] Run: docker container inspect ha-826489 --format={{.State.Status}}
	I0917 09:16:29.692194  466656 status.go:330] ha-826489 host status = "Running" (err=<nil>)
	I0917 09:16:29.692242  466656 host.go:66] Checking if "ha-826489" exists ...
	I0917 09:16:29.692628  466656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-826489
	I0917 09:16:29.710140  466656 host.go:66] Checking if "ha-826489" exists ...
	I0917 09:16:29.710419  466656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 09:16:29.710469  466656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-826489
	I0917 09:16:29.727287  466656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/ha-826489/id_rsa Username:docker}
	I0917 09:16:29.821767  466656 ssh_runner.go:195] Run: systemctl --version
	I0917 09:16:29.825763  466656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:16:29.837080  466656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 09:16:29.887214  466656 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-17 09:16:29.877863745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 09:16:29.887903  466656 kubeconfig.go:125] found "ha-826489" server: "https://192.168.49.254:8443"
	I0917 09:16:29.887962  466656 api_server.go:166] Checking apiserver status ...
	I0917 09:16:29.888013  466656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 09:16:29.898499  466656 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1479/cgroup
	I0917 09:16:29.906940  466656 api_server.go:182] apiserver freezer: "8:freezer:/docker/3aed7be0594b51e62c6d2e2daf86fb74d50cf6fc2d7af96d2bfa03fe5891b57e/crio/crio-e013894ffb6e4c433c7a0a7e6136d2d5aac9579421006589e954c1f47ac094d4"
	I0917 09:16:29.906992  466656 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3aed7be0594b51e62c6d2e2daf86fb74d50cf6fc2d7af96d2bfa03fe5891b57e/crio/crio-e013894ffb6e4c433c7a0a7e6136d2d5aac9579421006589e954c1f47ac094d4/freezer.state
	I0917 09:16:29.914556  466656 api_server.go:204] freezer state: "THAWED"
	I0917 09:16:29.914592  466656 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 09:16:29.918285  466656 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 09:16:29.918305  466656 status.go:422] ha-826489 apiserver status = Running (err=<nil>)
	I0917 09:16:29.918315  466656 status.go:257] ha-826489 status: &{Name:ha-826489 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:16:29.918330  466656 status.go:255] checking status of ha-826489-m02 ...
	I0917 09:16:29.918569  466656 cli_runner.go:164] Run: docker container inspect ha-826489-m02 --format={{.State.Status}}
	I0917 09:16:29.936068  466656 status.go:330] ha-826489-m02 host status = "Stopped" (err=<nil>)
	I0917 09:16:29.936107  466656 status.go:343] host is not running, skipping remaining checks
	I0917 09:16:29.936118  466656 status.go:257] ha-826489-m02 status: &{Name:ha-826489-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:16:29.936149  466656 status.go:255] checking status of ha-826489-m03 ...
	I0917 09:16:29.936425  466656 cli_runner.go:164] Run: docker container inspect ha-826489-m03 --format={{.State.Status}}
	I0917 09:16:29.952643  466656 status.go:330] ha-826489-m03 host status = "Running" (err=<nil>)
	I0917 09:16:29.952668  466656 host.go:66] Checking if "ha-826489-m03" exists ...
	I0917 09:16:29.952978  466656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-826489-m03
	I0917 09:16:29.970653  466656 host.go:66] Checking if "ha-826489-m03" exists ...
	I0917 09:16:29.970916  466656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 09:16:29.970959  466656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-826489-m03
	I0917 09:16:29.988387  466656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/ha-826489-m03/id_rsa Username:docker}
	I0917 09:16:30.081153  466656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:16:30.092123  466656 kubeconfig.go:125] found "ha-826489" server: "https://192.168.49.254:8443"
	I0917 09:16:30.092153  466656 api_server.go:166] Checking apiserver status ...
	I0917 09:16:30.092186  466656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 09:16:30.102307  466656 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	I0917 09:16:30.111047  466656 api_server.go:182] apiserver freezer: "8:freezer:/docker/d825f207fcfc3f9701b5e8350c3796d0c23bd1a569961a125b3c05d12af13d9f/crio/crio-3b779d4019dc70911da7e85cd4231c1a59ad0f8f35ba207fbf7253f0c6da6ba1"
	I0917 09:16:30.111108  466656 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d825f207fcfc3f9701b5e8350c3796d0c23bd1a569961a125b3c05d12af13d9f/crio/crio-3b779d4019dc70911da7e85cd4231c1a59ad0f8f35ba207fbf7253f0c6da6ba1/freezer.state
	I0917 09:16:30.118912  466656 api_server.go:204] freezer state: "THAWED"
	I0917 09:16:30.118942  466656 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 09:16:30.122535  466656 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 09:16:30.122562  466656 status.go:422] ha-826489-m03 apiserver status = Running (err=<nil>)
	I0917 09:16:30.122572  466656 status.go:257] ha-826489-m03 status: &{Name:ha-826489-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:16:30.122587  466656 status.go:255] checking status of ha-826489-m04 ...
	I0917 09:16:30.122885  466656 cli_runner.go:164] Run: docker container inspect ha-826489-m04 --format={{.State.Status}}
	I0917 09:16:30.139802  466656 status.go:330] ha-826489-m04 host status = "Running" (err=<nil>)
	I0917 09:16:30.139832  466656 host.go:66] Checking if "ha-826489-m04" exists ...
	I0917 09:16:30.140152  466656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-826489-m04
	I0917 09:16:30.156686  466656 host.go:66] Checking if "ha-826489-m04" exists ...
	I0917 09:16:30.156963  466656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 09:16:30.157036  466656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-826489-m04
	I0917 09:16:30.174423  466656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/ha-826489-m04/id_rsa Username:docker}
	I0917 09:16:30.265181  466656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:16:30.275517  466656 status.go:257] ha-826489-m04 status: &{Name:ha-826489-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.44s)
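The check above leans on the status exit code: with a host stopped, status exits non-zero (7 in this run). A minimal reproduction sketch from this run's commands:

	out/minikube-linux-amd64 -p ha-826489 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
	echo $?   # 7 here, while m02 reports host/kubelet/apiserver Stopped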

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.89s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 node start m02 -v=7 --alsologtostderr
E0917 09:16:32.926497  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:16:48.519024  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-826489 node start m02 -v=7 --alsologtostderr: (19.713798699s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr: (1.10581262s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.89s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (15.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.946405127s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (15.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (171.11s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-826489 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-826489 -v=7 --alsologtostderr
E0917 09:17:29.481198  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-826489 -v=7 --alsologtostderr: (36.868934347s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-826489 --wait=true -v=7 --alsologtostderr
E0917 09:17:55.991088  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:18:51.403014  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-826489 --wait=true -v=7 --alsologtostderr: (2m14.136722684s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-826489
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (171.11s)
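The restart sequence above is a full stop followed by start --wait=true on the same profile, with the node list compared before and after; a sketch from this run's commands:

	out/minikube-linux-amd64 node list -p ha-826489 -v=7 --alsologtostderr
	out/minikube-linux-amd64 stop -p ha-826489 -v=7 --alsologtostderr
	out/minikube-linux-amd64 start -p ha-826489 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-amd64 node list -p ha-826489   # same node set expected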

TestMultiControlPlane/serial/DeleteSecondaryNode (11.27s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-826489 node delete m03 -v=7 --alsologtostderr: (10.504695294s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.27s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

TestMultiControlPlane/serial/StopCluster (35.51s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-826489 stop -v=7 --alsologtostderr: (35.407428948s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr: exit status 7 (100.66644ms)
-- stdout --
	ha-826489
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-826489-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-826489-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0917 09:20:45.893264  484165 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:20:45.893380  484165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:20:45.893390  484165 out.go:358] Setting ErrFile to fd 2...
	I0917 09:20:45.893394  484165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:20:45.893577  484165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:20:45.893755  484165 out.go:352] Setting JSON to false
	I0917 09:20:45.893794  484165 mustload.go:65] Loading cluster: ha-826489
	I0917 09:20:45.893925  484165 notify.go:220] Checking for updates...
	I0917 09:20:45.894383  484165 config.go:182] Loaded profile config "ha-826489": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:20:45.894413  484165 status.go:255] checking status of ha-826489 ...
	I0917 09:20:45.894968  484165 cli_runner.go:164] Run: docker container inspect ha-826489 --format={{.State.Status}}
	I0917 09:20:45.913559  484165 status.go:330] ha-826489 host status = "Stopped" (err=<nil>)
	I0917 09:20:45.913585  484165 status.go:343] host is not running, skipping remaining checks
	I0917 09:20:45.913595  484165 status.go:257] ha-826489 status: &{Name:ha-826489 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:20:45.913640  484165 status.go:255] checking status of ha-826489-m02 ...
	I0917 09:20:45.913900  484165 cli_runner.go:164] Run: docker container inspect ha-826489-m02 --format={{.State.Status}}
	I0917 09:20:45.930337  484165 status.go:330] ha-826489-m02 host status = "Stopped" (err=<nil>)
	I0917 09:20:45.930362  484165 status.go:343] host is not running, skipping remaining checks
	I0917 09:20:45.930369  484165 status.go:257] ha-826489-m02 status: &{Name:ha-826489-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:20:45.930389  484165 status.go:255] checking status of ha-826489-m04 ...
	I0917 09:20:45.930636  484165 cli_runner.go:164] Run: docker container inspect ha-826489-m04 --format={{.State.Status}}
	I0917 09:20:45.946924  484165 status.go:330] ha-826489-m04 host status = "Stopped" (err=<nil>)
	I0917 09:20:45.946970  484165 status.go:343] host is not running, skipping remaining checks
	I0917 09:20:45.946983  484165 status.go:257] ha-826489-m04 status: &{Name:ha-826489-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.51s)

TestMultiControlPlane/serial/RestartCluster (119.71s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-826489 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0917 09:21:07.543532  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:21:32.926510  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:21:35.245318  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-826489 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m58.940614459s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (119.71s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

TestMultiControlPlane/serial/AddSecondaryNode (64.19s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-826489 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-826489 --control-plane -v=7 --alsologtostderr: (1m3.362528773s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (64.19s)
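Adding a further control-plane member is a single node add with --control-plane followed by a status check; a sketch from this run:

	out/minikube-linux-amd64 node add -p ha-826489 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-826489 status -v=7 --alsologtostderr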

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.63s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.63s)

TestJSONOutput/start/Command (71.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-488050 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-488050 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m11.791175323s)
--- PASS: TestJSONOutput/start/Command (71.79s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-488050 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-488050 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-488050 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-488050 --output=json --user=testUser: (5.753099726s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-362390 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-362390 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.129978ms)
-- stdout --
	{"specversion":"1.0","id":"818afb41-7a97-44f9-83fc-597e06988f87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-362390] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d7c1f5f-0b4e-46d4-817a-4d2303aa28e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"eb573941-4698-43a2-98f2-494941fa8b53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a8526ace-4fcd-466e-8131-8312b751b532","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig"}}
	{"specversion":"1.0","id":"8d8c53b9-3adc-4967-a908-a512d6e648fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube"}}
	{"specversion":"1.0","id":"4c693c3a-b8b5-4b86-94eb-51d535a0bd1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"438027dc-8bee-4304-ba19-f0cc6a213c55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8b3eb138-3989-4e97-bb79-3758386e85ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-362390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-362390
--- PASS: TestErrorJSONOutput (0.20s)
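Each stdout line above is a CloudEvents-style JSON object (specversion, id, source, type, data). Assuming jq is installed (not part of this run), the error event can be pulled out like so, using this test's profile name:

	out/minikube-linux-amd64 start -p json-output-error-362390 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# The driver 'fail' is not supported on linux/amd64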

TestKicCustomNetwork/create_custom_network (29.03s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-932061 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-932061 --network=: (27.042386469s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-932061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-932061
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-932061: (1.970948999s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.03s)

TestKicCustomNetwork/use_default_bridge_network (24.43s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-717053 --network=bridge
E0917 09:26:07.543506  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-717053 --network=bridge: (22.586877029s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-717053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-717053
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-717053: (1.826481884s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.43s)

TestKicExistingNetwork (22.55s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-980849 --network=existing-network
E0917 09:26:32.926406  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-980849 --network=existing-network: (20.533216377s)
helpers_test.go:175: Cleaning up "existing-network-980849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-980849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-980849: (1.872734108s)
--- PASS: TestKicExistingNetwork (22.55s)
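This test points minikube at a docker network that already exists; the log only shows the reuse, so the create step in the sketch below is an assumed pre-step, not taken from this run:

	docker network create existing-network   # assumed pre-step, implied by the test name
	out/minikube-linux-amd64 start -p existing-network-980849 --network=existing-network
	docker network ls --format {{.Name}}     # existing-network should appear in the list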

TestKicCustomSubnet (23.67s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-077606 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-077606 --subnet=192.168.60.0/24: (21.62221217s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-077606 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-077606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-077606
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-077606: (2.027525179s)
--- PASS: TestKicCustomSubnet (23.67s)
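The subnet assertion reads docker's IPAM config back through a Go-template inspect; a sketch from this run's commands:

	out/minikube-linux-amd64 start -p custom-subnet-077606 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-077606 --format "{{(index .IPAM.Config 0).Subnet}}"
	# expected output: 192.168.60.0/24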

TestKicStaticIP (26.71s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-604485 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-604485 --static-ip=192.168.200.200: (24.556124653s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-604485 ip
helpers_test.go:175: Cleaning up "static-ip-604485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-604485
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-604485: (2.035384203s)
--- PASS: TestKicStaticIP (26.71s)
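The static IP is verified by asking minikube for the node IP after start; a sketch from this run's commands:

	out/minikube-linux-amd64 start -p static-ip-604485 --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p static-ip-604485 ip   # expected: 192.168.200.200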

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (47.75s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-277781 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-277781 --driver=docker  --container-runtime=crio: (22.800035426s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-290834 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-290834 --driver=docker  --container-runtime=crio: (20.223601087s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-277781
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-290834
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-290834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-290834
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-290834: (1.831103006s)
helpers_test.go:175: Cleaning up "first-277781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-277781
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-277781: (1.824958145s)
--- PASS: TestMinikubeProfile (47.75s)
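The profile switch is exercised by selecting each profile in turn and reading the JSON list back; a sketch from this run:

	out/minikube-linux-amd64 profile first-277781   # make it the active profile
	out/minikube-linux-amd64 profile list -ojson    # the active profile should be reflected here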

TestMountStart/serial/StartWithMountFirst (5.66s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-888659 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-888659 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.655569945s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.66s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-888659 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (8.17s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-904519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-904519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.1718212s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.17s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904519 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-888659 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-888659 --alsologtostderr -v=5: (1.600456368s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904519 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-904519
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-904519: (1.17434789s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.2s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-904519
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-904519: (6.194860763s)
--- PASS: TestMountStart/serial/RestartStopped (7.20s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-904519 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)
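The whole MountStart series boils down to starting a profile with a host mount and listing the mount point over ssh; a sketch using this run's flags:

	out/minikube-linux-amd64 start -p mount-start-2-904519 --memory=2048 --mount \
	  --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 \
	  --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p mount-start-2-904519 ssh -- ls /minikube-host   # host dir visible in the guest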

TestMultiNode/serial/FreshStart2Nodes (67.1s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943302 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-943302 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m6.645816855s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.10s)

TestMultiNode/serial/DeployApp2Nodes (3.4s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-943302 -- rollout status deployment/busybox: (2.026517441s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-9sgvl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-k5xx9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-9sgvl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-k5xx9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-9sgvl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-k5xx9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.40s)
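
The sequence above is a deploy-and-resolve smoke test: roll out a two-replica busybox deployment, then resolve cluster DNS names from every replica. A minimal by-hand sketch of the same check, assuming the multinode-943302 profile and the busybox deployment from this run are still present:

	# wait for the deployment, then exercise cluster DNS from each pod
	kubectl --context multinode-943302 rollout status deployment/busybox
	for pod in $(kubectl --context multinode-943302 get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl --context multinode-943302 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done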

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-9sgvl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-9sgvl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-k5xx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943302 -- exec busybox-7dff88458-k5xx9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
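
The nslookup | awk 'NR==5' | cut pipeline above extracts the address for host.minikube.internal from busybox's nslookup output (line 5 is the Address line in that format), and the pod then pings it. The same check for one pod, a sketch with the pod name taken from this run:

	HOST_IP=$(kubectl --context multinode-943302 exec busybox-7dff88458-9sgvl -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-943302 exec busybox-7dff88458-9sgvl -- ping -c 1 "$HOST_IP"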

TestMultiNode/serial/AddNode (27.66s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-943302 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-943302 -v 3 --alsologtostderr: (27.065967864s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.66s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-943302 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (8.98s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp testdata/cp-test.txt multinode-943302:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4259218652/001/cp-test_multinode-943302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302:/home/docker/cp-test.txt multinode-943302-m02:/home/docker/cp-test_multinode-943302_multinode-943302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m02 "sudo cat /home/docker/cp-test_multinode-943302_multinode-943302-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302:/home/docker/cp-test.txt multinode-943302-m03:/home/docker/cp-test_multinode-943302_multinode-943302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m03 "sudo cat /home/docker/cp-test_multinode-943302_multinode-943302-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp testdata/cp-test.txt multinode-943302-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4259218652/001/cp-test_multinode-943302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302-m02:/home/docker/cp-test.txt multinode-943302:/home/docker/cp-test_multinode-943302-m02_multinode-943302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302 "sudo cat /home/docker/cp-test_multinode-943302-m02_multinode-943302.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302-m02:/home/docker/cp-test.txt multinode-943302-m03:/home/docker/cp-test_multinode-943302-m02_multinode-943302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m03 "sudo cat /home/docker/cp-test_multinode-943302-m02_multinode-943302-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp testdata/cp-test.txt multinode-943302-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4259218652/001/cp-test_multinode-943302-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302-m03:/home/docker/cp-test.txt multinode-943302:/home/docker/cp-test_multinode-943302-m03_multinode-943302.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302 "sudo cat /home/docker/cp-test_multinode-943302-m03_multinode-943302.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 cp multinode-943302-m03:/home/docker/cp-test.txt multinode-943302-m02:/home/docker/cp-test_multinode-943302-m03_multinode-943302-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m02 "sudo cat /home/docker/cp-test_multinode-943302-m03_multinode-943302-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.98s)
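
Every pairing above is the same round trip: minikube cp a file onto a node, then ssh -n into that node and cat it back to compare contents. One leg of it, with names from this run:

	out/minikube-linux-amd64 -p multinode-943302 cp testdata/cp-test.txt multinode-943302-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-943302 ssh -n multinode-943302-m02 "sudo cat /home/docker/cp-test.txt"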

TestMultiNode/serial/StopNode (2.09s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-943302 node stop m03: (1.172749596s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-943302 status: exit status 7 (457.441171ms)

-- stdout --
	multinode-943302
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-943302-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-943302-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-943302 status --alsologtostderr: exit status 7 (455.9175ms)

-- stdout --
	multinode-943302
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-943302-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-943302-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 09:30:31.966326  549928 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:30:31.966458  549928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:30:31.966469  549928 out.go:358] Setting ErrFile to fd 2...
	I0917 09:30:31.966473  549928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:30:31.966644  549928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:30:31.966801  549928 out.go:352] Setting JSON to false
	I0917 09:30:31.966835  549928 mustload.go:65] Loading cluster: multinode-943302
	I0917 09:30:31.966965  549928 notify.go:220] Checking for updates...
	I0917 09:30:31.967374  549928 config.go:182] Loaded profile config "multinode-943302": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:30:31.967399  549928 status.go:255] checking status of multinode-943302 ...
	I0917 09:30:31.967927  549928 cli_runner.go:164] Run: docker container inspect multinode-943302 --format={{.State.Status}}
	I0917 09:30:31.986651  549928 status.go:330] multinode-943302 host status = "Running" (err=<nil>)
	I0917 09:30:31.986678  549928 host.go:66] Checking if "multinode-943302" exists ...
	I0917 09:30:31.986928  549928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-943302
	I0917 09:30:32.003492  549928 host.go:66] Checking if "multinode-943302" exists ...
	I0917 09:30:32.003787  549928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 09:30:32.003828  549928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-943302
	I0917 09:30:32.021463  549928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/multinode-943302/id_rsa Username:docker}
	I0917 09:30:32.113060  549928 ssh_runner.go:195] Run: systemctl --version
	I0917 09:30:32.117099  549928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:30:32.127206  549928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 09:30:32.175186  549928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-17 09:30:32.165865609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 09:30:32.175886  549928 kubeconfig.go:125] found "multinode-943302" server: "https://192.168.67.2:8443"
	I0917 09:30:32.175923  549928 api_server.go:166] Checking apiserver status ...
	I0917 09:30:32.175976  549928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 09:30:32.186431  549928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	I0917 09:30:32.194788  549928 api_server.go:182] apiserver freezer: "8:freezer:/docker/f9f2b0caaf8ea37428e5254f4392a77f683fce81f38e4246f47e649838098fc8/crio/crio-4355d37a9bd8c6d1bb80139b3593cde35fa38c49889bb4280a81546a052507fb"
	I0917 09:30:32.194846  549928 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f9f2b0caaf8ea37428e5254f4392a77f683fce81f38e4246f47e649838098fc8/crio/crio-4355d37a9bd8c6d1bb80139b3593cde35fa38c49889bb4280a81546a052507fb/freezer.state
	I0917 09:30:32.202576  549928 api_server.go:204] freezer state: "THAWED"
	I0917 09:30:32.202609  549928 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 09:30:32.206344  549928 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 09:30:32.206377  549928 status.go:422] multinode-943302 apiserver status = Running (err=<nil>)
	I0917 09:30:32.206388  549928 status.go:257] multinode-943302 status: &{Name:multinode-943302 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:30:32.206410  549928 status.go:255] checking status of multinode-943302-m02 ...
	I0917 09:30:32.206659  549928 cli_runner.go:164] Run: docker container inspect multinode-943302-m02 --format={{.State.Status}}
	I0917 09:30:32.224054  549928 status.go:330] multinode-943302-m02 host status = "Running" (err=<nil>)
	I0917 09:30:32.224089  549928 host.go:66] Checking if "multinode-943302-m02" exists ...
	I0917 09:30:32.224391  549928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-943302-m02
	I0917 09:30:32.241100  549928 host.go:66] Checking if "multinode-943302-m02" exists ...
	I0917 09:30:32.241395  549928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 09:30:32.241445  549928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-943302-m02
	I0917 09:30:32.258916  549928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19648-389277/.minikube/machines/multinode-943302-m02/id_rsa Username:docker}
	I0917 09:30:32.348937  549928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 09:30:32.359364  549928 status.go:257] multinode-943302-m02 status: &{Name:multinode-943302-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:30:32.359399  549928 status.go:255] checking status of multinode-943302-m03 ...
	I0917 09:30:32.359690  549928 cli_runner.go:164] Run: docker container inspect multinode-943302-m03 --format={{.State.Status}}
	I0917 09:30:32.376352  549928 status.go:330] multinode-943302-m03 host status = "Stopped" (err=<nil>)
	I0917 09:30:32.376373  549928 status.go:343] host is not running, skipping remaining checks
	I0917 09:30:32.376381  549928 status.go:257] multinode-943302-m03 status: &{Name:multinode-943302-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)
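
Note that status deliberately exits non-zero (7 here) once any node is down, so scripted health checks have to treat that code as "degraded", not as a command failure. A sketch, assuming the profile above:

	out/minikube-linux-amd64 -p multinode-943302 node stop m03
	out/minikube-linux-amd64 -p multinode-943302 status \
	  || echo "status exit $?: 7 means one or more nodes are stopped"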

TestMultiNode/serial/StartAfterStop (8.95s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-943302 node start m03 -v=7 --alsologtostderr: (8.298047018s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.95s)

TestMultiNode/serial/RestartKeepsNodes (100.89s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-943302
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-943302
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-943302: (24.626949167s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943302 --wait=true -v=8 --alsologtostderr
E0917 09:31:07.542781  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:31:32.926714  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-943302 --wait=true -v=8 --alsologtostderr: (1m16.162159957s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-943302
--- PASS: TestMultiNode/serial/RestartKeepsNodes (100.89s)
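
The property under test is that a full stop/start cycle leaves the node list unchanged. Condensed into a sketch, assuming the profile from this run (start on an existing profile reuses its saved configuration):

	before=$(out/minikube-linux-amd64 node list -p multinode-943302)
	out/minikube-linux-amd64 stop -p multinode-943302
	out/minikube-linux-amd64 start -p multinode-943302 --wait=true
	after=$(out/minikube-linux-amd64 node list -p multinode-943302)
	[ "$before" = "$after" ] && echo "node list preserved across restart"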

TestMultiNode/serial/DeleteNode (5.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-943302 node delete m03: (4.654427594s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)
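
Deleting a node should remove it both from the minikube profile and from the Kubernetes API; a quick check along the lines of the test above:

	out/minikube-linux-amd64 -p multinode-943302 node delete m03
	kubectl --context multinode-943302 get nodes   # m03 should no longer be listed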

TestMultiNode/serial/StopMultiNode (23.67s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 stop
E0917 09:32:30.608105  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-943302 stop: (23.504444283s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-943302 status: exit status 7 (83.3799ms)

-- stdout --
	multinode-943302
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-943302-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-943302 status --alsologtostderr: exit status 7 (83.792447ms)

-- stdout --
	multinode-943302
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-943302-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 09:32:51.072778  559681 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:32:51.072924  559681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:32:51.072935  559681 out.go:358] Setting ErrFile to fd 2...
	I0917 09:32:51.072942  559681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:32:51.073122  559681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:32:51.073321  559681 out.go:352] Setting JSON to false
	I0917 09:32:51.073364  559681 mustload.go:65] Loading cluster: multinode-943302
	I0917 09:32:51.073472  559681 notify.go:220] Checking for updates...
	I0917 09:32:51.073836  559681 config.go:182] Loaded profile config "multinode-943302": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:32:51.073856  559681 status.go:255] checking status of multinode-943302 ...
	I0917 09:32:51.074318  559681 cli_runner.go:164] Run: docker container inspect multinode-943302 --format={{.State.Status}}
	I0917 09:32:51.092399  559681 status.go:330] multinode-943302 host status = "Stopped" (err=<nil>)
	I0917 09:32:51.092447  559681 status.go:343] host is not running, skipping remaining checks
	I0917 09:32:51.092466  559681 status.go:257] multinode-943302 status: &{Name:multinode-943302 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 09:32:51.092531  559681 status.go:255] checking status of multinode-943302-m02 ...
	I0917 09:32:51.092928  559681 cli_runner.go:164] Run: docker container inspect multinode-943302-m02 --format={{.State.Status}}
	I0917 09:32:51.110447  559681 status.go:330] multinode-943302-m02 host status = "Stopped" (err=<nil>)
	I0917 09:32:51.110495  559681 status.go:343] host is not running, skipping remaining checks
	I0917 09:32:51.110510  559681 status.go:257] multinode-943302-m02 status: &{Name:multinode-943302-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.67s)

TestMultiNode/serial/RestartMultiNode (52.77s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943302 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-943302 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (52.213668833s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943302 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.77s)

TestMultiNode/serial/ValidateNameConflict (23.56s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-943302
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943302-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-943302-m02 --driver=docker  --container-runtime=crio: exit status 14 (65.048656ms)

-- stdout --
	* [multinode-943302-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-943302-m02' is duplicated with machine name 'multinode-943302-m02' in profile 'multinode-943302'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943302-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-943302-m03 --driver=docker  --container-runtime=crio: (21.354596444s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-943302
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-943302: exit status 80 (268.202613ms)

-- stdout --
	* Adding node m03 to cluster multinode-943302 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-943302-m03 already exists in multinode-943302-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-943302-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-943302-m03: (1.825505866s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.56s)
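
The -m02/-m03 suffixes are how minikube names the extra machines of a multi-node profile, so both failures above are name-collision guards: a standalone profile that reuses such a name is refused outright (exit 14), and node add refuses to shadow an existing profile of that shape (exit 80). For example:

	# refused with MK_USAGE (exit 14): the name collides with multinode-943302's second node
	out/minikube-linux-amd64 start -p multinode-943302-m02 --driver=docker --container-runtime=crio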

TestPreload (102.64s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-336224 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0917 09:34:35.992530  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-336224 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m16.487287698s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-336224 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-336224 image pull gcr.io/k8s-minikube/busybox: (1.043495148s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-336224
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-336224: (5.686951669s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-336224 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-336224 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.901161057s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-336224 image list
helpers_test.go:175: Cleaning up "test-preload-336224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-336224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-336224: (2.294912799s)
--- PASS: TestPreload (102.64s)
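
The flow above checks that an image pulled into a cluster started with --preload=false survives a stop/start, i.e. the preload tarball applied on restart does not clobber existing container storage. Condensed into a sketch, with names from this run:

	out/minikube-linux-amd64 start -p test-preload-336224 --memory=2200 --preload=false \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-336224 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-336224
	out/minikube-linux-amd64 start -p test-preload-336224 --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-336224 image list | grep busybox   # pulled image should survive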

TestScheduledStopUnix (95.80s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-663894 --memory=2048 --driver=docker  --container-runtime=crio
E0917 09:36:07.543352  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-663894 --memory=2048 --driver=docker  --container-runtime=crio: (20.145679432s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-663894 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-663894 -n scheduled-stop-663894
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-663894 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-663894 --cancel-scheduled
E0917 09:36:32.926732  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-663894 -n scheduled-stop-663894
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-663894
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-663894 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-663894
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-663894: exit status 7 (64.603616ms)

-- stdout --
	scheduled-stop-663894
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-663894 -n scheduled-stop-663894
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-663894 -n scheduled-stop-663894: exit status 7 (65.618467ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-663894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-663894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-663894: (4.377173471s)
--- PASS: TestScheduledStopUnix (95.80s)
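
The schedule/cancel dance above in one place; status --format={{.TimeToStop}} is only populated while a stop is pending, and once the short schedule fires, plain status exits 7. A sketch, with the sleep duration an assumption to let the 15s schedule elapse:

	out/minikube-linux-amd64 stop -p scheduled-stop-663894 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-663894
	out/minikube-linux-amd64 stop -p scheduled-stop-663894 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-663894 --schedule 15s
	sleep 20; out/minikube-linux-amd64 status -p scheduled-stop-663894   # exit 7: host stopped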

TestInsufficientStorage (9.55s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-704769 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-704769 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.219787872s)

-- stdout --
	{"specversion":"1.0","id":"ac5c0842-7b05-4d87-b874-0d99c990c8dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-704769] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2cad21bc-3a52-4886-a8af-d8b3e3a039ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"bd5482b5-2bd9-4ada-81cb-b687dac7b733","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"153db256-5efa-4585-84f8-a4b4449f8f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig"}}
	{"specversion":"1.0","id":"53a8cf34-e2dc-469e-8cec-e870f0d3be8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube"}}
	{"specversion":"1.0","id":"f9e0b426-f4fc-49f8-a63e-c1a52c9022c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"00a3ea04-033e-441c-b84f-04857acc26fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5bc80714-b311-468a-810d-88628869fa05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ddc75f52-0c31-4643-bc22-7c5dd217a6a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"56793321-d630-4ac2-9690-75270f5511d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ecf5fd3-3ac0-4792-ae38-6a09eb20eb09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5fc0f19c-fff2-4e9f-b2a6-3de43d535bbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-704769\" primary control-plane node in \"insufficient-storage-704769\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"afb745e6-dca4-4a2b-96f1-b4da3ca3d26b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3031dcce-2771-4b54-8952-0f9d6f8873af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c600fbc-dff9-4da6-9cbd-7b535ba367c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-704769 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-704769 --output=json --layout=cluster: exit status 7 (266.642319ms)

-- stdout --
	{"Name":"insufficient-storage-704769","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-704769","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 09:37:37.165652  582135 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-704769" does not appear in /home/jenkins/minikube-integration/19648-389277/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-704769 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-704769 --output=json --layout=cluster: exit status 7 (255.106446ms)

-- stdout --
	{"Name":"insufficient-storage-704769","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-704769","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 09:37:37.421946  582236 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-704769" does not appear in /home/jenkins/minikube-integration/19648-389277/kubeconfig
	E0917 09:37:37.431608  582236 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/insufficient-storage-704769/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-704769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-704769
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-704769: (1.808148837s)
--- PASS: TestInsufficientStorage (9.55s)
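
With --output=json the start emits one CloudEvent per line, so the storage failure is machine-readable rather than scraped from text. A sketch of extracting it, assuming jq is available and the host is in the same low-disk state as this run:

	out/minikube-linux-amd64 start -p insufficient-storage-704769 --memory=2048 --output=json \
	    --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + "): " + .data.message'
	# prints: RSRC_DOCKER_STORAGE (exit 26): Docker is out of disk space! ...
	# per the message above, --force skips the storage check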

TestRunningBinaryUpgrade (58.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1991517778 start -p running-upgrade-624981 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1991517778 start -p running-upgrade-624981 --memory=2200 --vm-driver=docker  --container-runtime=crio: (31.287041866s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-624981 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-624981 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.085825392s)
helpers_test.go:175: Cleaning up "running-upgrade-624981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-624981
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-624981: (3.765295768s)
--- PASS: TestRunningBinaryUpgrade (58.73s)

TestKubernetesUpgrade (328.79s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.02971819s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-869453
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-869453: (1.197874595s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-869453 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-869453 status --format={{.Host}}: exit status 7 (64.665734ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.866595289s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-869453 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (87.362959ms)

-- stdout --
	* [kubernetes-upgrade-869453] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-869453
	    minikube start -p kubernetes-upgrade-869453 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8694532 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-869453 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.352592446s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-869453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-869453
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-869453: (2.113135942s)
--- PASS: TestKubernetesUpgrade (328.79s)
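
The supported direction is upgrade-in-place: stop, then start again with a newer --kubernetes-version. A downgrade request on the same profile is refused up front (exit 106, K8S_DOWNGRADE_UNSUPPORTED) with the delete/recreate options shown above. Condensed:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-869453
	out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=crio   # upgrade: ok
	out/minikube-linux-amd64 start -p kubernetes-upgrade-869453 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # downgrade: exit 106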

TestMissingContainerUpgrade (136.74s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1281070418 start -p missing-upgrade-676896 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1281070418 start -p missing-upgrade-676896 --memory=2200 --driver=docker  --container-runtime=crio: (1m15.742893228s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-676896
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-676896: (1.649831734s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-676896
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-676896 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-676896 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.875734504s)
helpers_test.go:175: Cleaning up "missing-upgrade-676896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-676896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-676896: (2.054382986s)
--- PASS: TestMissingContainerUpgrade (136.74s)

TestStoppedBinaryUpgrade/Setup (0.44s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

TestStoppedBinaryUpgrade/Upgrade (95.03s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2856100596 start -p stopped-upgrade-961637 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2856100596 start -p stopped-upgrade-961637 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m7.108796022s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2856100596 -p stopped-upgrade-961637 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2856100596 -p stopped-upgrade-961637 stop: (4.265230906s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-961637 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-961637 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.654400926s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (95.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-961637
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestPause/serial/Start (47.41s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-701815 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-701815 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (47.411787557s)
--- PASS: TestPause/serial/Start (47.41s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-777221 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-777221 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (75.058828ms)

-- stdout --
	* [NoKubernetes-777221] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
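This subtest passes because the failure is the expected behavior: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube refuses the combination with MK_USAGE (exit status 14) before doing any work. A minimal sketch of the check and the suggested fix (the profile name is illustrative):

	# Rejected with MK_USAGE: the two flags conflict
	minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
	echo $?   # 14

	# Clear any globally configured version, as the error message suggests
	minikube config unset kubernetes-version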

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (24.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-777221 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-777221 --driver=docker  --container-runtime=crio: (24.515362246s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-777221 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.83s)

TestNetworkPlugins/group/false (3.42s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-164328 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-164328 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (170.502911ms)

-- stdout --
	* [false-164328] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19648
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0917 09:39:59.239764  615298 out.go:345] Setting OutFile to fd 1 ...
	I0917 09:39:59.240219  615298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:39:59.240237  615298 out.go:358] Setting ErrFile to fd 2...
	I0917 09:39:59.240245  615298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 09:39:59.240515  615298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19648-389277/.minikube/bin
	I0917 09:39:59.241304  615298 out.go:352] Setting JSON to false
	I0917 09:39:59.242841  615298 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12148,"bootTime":1726553851,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 09:39:59.242988  615298 start.go:139] virtualization: kvm guest
	I0917 09:39:59.244975  615298 out.go:177] * [false-164328] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0917 09:39:59.246803  615298 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 09:39:59.246855  615298 notify.go:220] Checking for updates...
	I0917 09:39:59.249594  615298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 09:39:59.250968  615298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19648-389277/kubeconfig
	I0917 09:39:59.252487  615298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19648-389277/.minikube
	I0917 09:39:59.253836  615298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 09:39:59.255060  615298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 09:39:59.256943  615298 config.go:182] Loaded profile config "NoKubernetes-777221": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:39:59.257091  615298 config.go:182] Loaded profile config "kubernetes-upgrade-869453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:39:59.257214  615298 config.go:182] Loaded profile config "pause-701815": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0917 09:39:59.257338  615298 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 09:39:59.287252  615298 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 09:39:59.287352  615298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 09:39:59.345501  615298 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:78 SystemTime:2024-09-17 09:39:59.333235693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 09:39:59.345657  615298 docker.go:318] overlay module found
	I0917 09:39:59.347053  615298 out.go:177] * Using the docker driver based on user configuration
	I0917 09:39:59.348457  615298 start.go:297] selected driver: docker
	I0917 09:39:59.348489  615298 start.go:901] validating driver "docker" against <nil>
	I0917 09:39:59.348506  615298 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 09:39:59.350746  615298 out.go:201] 
	W0917 09:39:59.352101  615298 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0917 09:39:59.353528  615298 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-164328 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-164328

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-164328

>>> host: /etc/nsswitch.conf:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /etc/hosts:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /etc/resolv.conf:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-164328

>>> host: crictl pods:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: crictl containers:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> k8s: describe netcat deployment:
error: context "false-164328" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-164328" does not exist

>>> k8s: netcat logs:
error: context "false-164328" does not exist

>>> k8s: describe coredns deployment:
error: context "false-164328" does not exist

>>> k8s: describe coredns pods:
error: context "false-164328" does not exist

>>> k8s: coredns logs:
error: context "false-164328" does not exist

>>> k8s: describe api server pod(s):
error: context "false-164328" does not exist

>>> k8s: api server logs:
error: context "false-164328" does not exist

>>> host: /etc/cni:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: ip a s:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: ip r s:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: iptables-save:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: iptables table nat:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> k8s: describe kube-proxy daemon set:
error: context "false-164328" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-164328" does not exist

>>> k8s: kube-proxy logs:
error: context "false-164328" does not exist

>>> host: kubelet daemon status:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: kubelet daemon config:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> k8s: kubelet logs:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:38:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-869453
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:39:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-701815
contexts:
- context:
    cluster: kubernetes-upgrade-869453
    user: kubernetes-upgrade-869453
  name: kubernetes-upgrade-869453
- context:
    cluster: pause-701815
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:39:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-701815
  name: pause-701815
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-869453
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/kubernetes-upgrade-869453/client.crt
    client-key: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/kubernetes-upgrade-869453/client.key
- name: pause-701815
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/pause-701815/client.crt
    client-key: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/pause-701815/client.key
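Note that current-context is "" in this kubeconfig, so every probe above and below must name a context explicitly, and the nonexistent false-164328 context fails. Selecting one of the two real contexts would look like:

	kubectl config use-context pause-701815
	# or per-command: kubectl --context pause-701815 get pods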

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-164328

>>> host: docker daemon status:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: docker daemon config:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /etc/docker/daemon.json:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: docker system info:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: cri-docker daemon status:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: cri-docker daemon config:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: cri-dockerd version:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: containerd daemon status:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: containerd daemon config:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /etc/containerd/config.toml:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: containerd config dump:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: crio daemon status:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: crio daemon config:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: /etc/crio:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

>>> host: crio config:
* Profile "false-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-164328"

----------------------- debugLogs end: false-164328 [took: 3.069122928s] --------------------------------
helpers_test.go:175: Cleaning up "false-164328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-164328
--- PASS: TestNetworkPlugins/group/false (3.42s)
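The group passes because the rejection is the expected behavior: the crio runtime always needs a CNI plugin, so --cni=false is refused with MK_USAGE before any cluster is created, which is also why every debug probe above reports a missing profile or context. For contrast, a sketch of a start that crio does accept (the profile name is illustrative):

	# crio requires a CNI; auto or a concrete plugin such as bridge works
	minikube start -p demo --cni=bridge --driver=docker --container-runtime=crio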

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.68s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-777221 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-777221 --no-kubernetes --driver=docker  --container-runtime=crio: (3.561487519s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-777221 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-777221 status -o json: exit status 2 (265.647946ms)

-- stdout --
	{"Name":"NoKubernetes-777221","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-777221
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-777221: (1.850778318s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.68s)

TestPause/serial/SecondStartNoReconfiguration (27.97s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-701815 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-701815 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.955610109s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.97s)

TestNoKubernetes/serial/Start (7.94s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-777221 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-777221 --no-kubernetes --driver=docker  --container-runtime=crio: (7.939355773s)
--- PASS: TestNoKubernetes/serial/Start (7.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-777221 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-777221 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.996935ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
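The check relies on the exit code rather than output: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active, so the non-zero code (3, surfaced through ssh as "Process exited with status 3") confirms the kubelet is not running. The same probe by hand:

	minikube ssh -p NoKubernetes-777221 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # non-zero (3 = inactive) while Kubernetes is disabled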

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.63s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.840049075s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.63s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-777221
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-777221: (1.201246759s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (6.66s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-777221 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-777221 --driver=docker  --container-runtime=crio: (6.661583113s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.66s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-777221 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-777221 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.064325ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPause/serial/Pause (0.95s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-701815 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-701815 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-701815 --output=json --layout=cluster: exit status 2 (391.073715ms)

-- stdout --
	{"Name":"pause-701815","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-701815","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
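Exit status 2 is expected for a paused cluster; the JSON reuses HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A sketch for pulling the per-component states out of that payload (assumes jq is available):

	out/minikube-linux-amd64 status -p pause-701815 --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, components: .Nodes[0].Components | map_values(.StatusName)}'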

                                                
                                    
TestPause/serial/Unpause (1.05s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-701815 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-701815 --alsologtostderr -v=5: (1.046805112s)
--- PASS: TestPause/serial/Unpause (1.05s)

TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-701815 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

TestPause/serial/DeletePaused (2.87s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-701815 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-701815 --alsologtostderr -v=5: (2.874408571s)
--- PASS: TestPause/serial/DeletePaused (2.87s)

TestPause/serial/VerifyDeletedResources (13.82s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.766173929s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-701815
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-701815: exit status 1 (17.275872ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-701815: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (13.82s)
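Deletion is verified from three angles: the profile list no longer contains pause-701815, docker ps -a shows no leftover container, and docker volume inspect fails with "no such volume" (exit status 1 is the success condition here). The same checks by hand (the jq usage is illustrative):

	out/minikube-linux-amd64 profile list --output json | jq '.valid[].Name'   # no pause-701815
	docker ps -a --filter name=pause-701815 --format '{{.Names}}'              # empty
	docker volume inspect pause-701815; echo $?                                # exit 1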

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (124.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-155649 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-155649 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m4.27291714s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (124.27s)

TestStartStop/group/no-preload/serial/FirstStart (53.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-464591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0917 09:41:32.925666  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/addons-093168/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-464591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (53.836680909s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.84s)

TestStartStop/group/no-preload/serial/DeployApp (8.22s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-464591 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3673e6c7-fdf5-455f-ae34-1f96d90c95a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3673e6c7-fdf5-455f-ae34-1f96d90c95a8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004695536s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-464591 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)
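The deploy step applies testdata/busybox.yaml, waits for the integration-test=busybox label to report Ready, then reads the open-file limit inside the pod. A rough equivalent of that manifest (the real testdata file may differ in image tag and command):

	kubectl --context no-preload-464591 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox
	    command: ["sleep", "3600"]
	EOF
	kubectl --context no-preload-464591 exec busybox -- /bin/sh -c "ulimit -n"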

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-464591 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-464591 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/no-preload/serial/Stop (11.82s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-464591 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-464591 --alsologtostderr -v=3: (11.818837032s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.82s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-464591 -n no-preload-464591
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-464591 -n no-preload-464591: exit status 7 (68.039347ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-464591 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (261.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-464591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-464591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m21.493919987s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-464591 -n no-preload-464591
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (261.80s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-155649 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d352a28d-5fa5-41ed-9de3-0e5e4ca4a230] Pending
helpers_test.go:344: "busybox" [d352a28d-5fa5-41ed-9de3-0e5e4ca4a230] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d352a28d-5fa5-41ed-9de3-0e5e4ca4a230] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004782124s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-155649 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

TestStartStop/group/embed-certs/serial/FirstStart (38.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-649251 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-649251 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (38.08689789s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (38.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-155649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-155649 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-155649 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-155649 --alsologtostderr -v=3: (12.061277255s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-155649 -n old-k8s-version-155649
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-155649 -n old-k8s-version-155649: exit status 7 (81.897727ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-155649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (127.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-155649 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-155649 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m7.059830692s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-155649 -n old-k8s-version-155649
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (127.41s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-649251 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7273f800-eb5b-4d16-b0f1-3f54bedc603c] Pending
helpers_test.go:344: "busybox" [7273f800-eb5b-4d16-b0f1-3f54bedc603c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7273f800-eb5b-4d16-b0f1-3f54bedc603c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00403255s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-649251 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-649251 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-649251 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/embed-certs/serial/Stop (11.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-649251 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-649251 --alsologtostderr -v=3: (11.985997824s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.99s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-649251 -n embed-certs-649251
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-649251 -n embed-certs-649251: exit status 7 (69.524062ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-649251 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (262.97s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-649251 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-649251 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.667192803s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-649251 -n embed-certs-649251
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.97s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-951221 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-951221 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (40.951893327s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.95s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-951221 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [481a66e9-e6b7-4f39-8795-59ad5742da15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [481a66e9-e6b7-4f39-8795-59ad5742da15] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003897601s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-951221 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-951221 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-951221 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-951221 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-951221 --alsologtostderr -v=3: (11.846998579s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221: exit status 7 (68.592125ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-951221 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-951221 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-951221 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.450880286s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.86s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-b859q" [ef89b6b1-8b57-4167-b6bd-ad96547a8f0b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00400979s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-b859q" [ef89b6b1-8b57-4167-b6bd-ad96547a8f0b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003481622s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-155649 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-155649 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-155649 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-155649 -n old-k8s-version-155649
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-155649 -n old-k8s-version-155649: exit status 2 (291.161319ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-155649 -n old-k8s-version-155649
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-155649 -n old-k8s-version-155649: exit status 2 (290.798479ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-155649 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-155649 -n old-k8s-version-155649
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-155649 -n old-k8s-version-155649
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.53s)
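
The Pause checks follow a fixed sequence; a minimal sketch of the same verification, using the exit behaviour recorded above (status exits 2 while the cluster is paused, so the first two queries are guarded, which is an assumption for interactive use):

    # Pause, then read per-component state: APIServer=Paused, Kubelet=Stopped.
    out/minikube-linux-amd64 pause -p old-k8s-version-155649 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-155649 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-155649 || true

    # Unpause and re-check; both queries now succeed with exit 0.
    out/minikube-linux-amd64 unpause -p old-k8s-version-155649 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-155649
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-155649
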
TestStartStop/group/newest-cni/serial/FirstStart (26.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-883479 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0917 09:46:07.542914  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-883479 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (26.679342948s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.68s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-883479 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/newest-cni/serial/Stop (1.18s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-883479 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-883479 --alsologtostderr -v=3: (1.182920184s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.18s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-883479 -n newest-cni-883479
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-883479 -n newest-cni-883479: exit status 7 (68.039644ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-883479 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (12.42s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-883479 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-883479 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (12.111199381s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-883479 -n newest-cni-883479
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.42s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-883479 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-883479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-883479 -n newest-cni-883479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-883479 -n newest-cni-883479: exit status 2 (290.605154ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-883479 -n newest-cni-883479
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-883479 -n newest-cni-883479: exit status 2 (291.992411ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-883479 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-883479 -n newest-cni-883479
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-883479 -n newest-cni-883479
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

TestNetworkPlugins/group/auto/Start (38.30s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (38.299129556s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.30s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4bnwf" [b0e99085-9723-4877-900c-dfb47bb66094] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004299942s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4bnwf" [b0e99085-9723-4877-900c-dfb47bb66094] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003637473s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-464591 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-464591 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.80s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-464591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-464591 -n no-preload-464591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-464591 -n no-preload-464591: exit status 2 (309.333215ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-464591 -n no-preload-464591
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-464591 -n no-preload-464591: exit status 2 (323.718585ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-464591 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-464591 -n no-preload-464591
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-464591 -n no-preload-464591
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-164328 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-164328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z8jmd" [49f3e673-40bb-422c-b9ca-cb802caeaf05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z8jmd" [49f3e673-40bb-422c-b9ca-cb802caeaf05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004630537s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.21s)
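
Each NetCatPod check first deploys the same test client, then waits for it to become Ready before probing. A minimal sketch of that step; the test polls with its own helper, and the `kubectl wait` form here is an assumed equivalent:

    # Deploy the netcat client from the repo's testdata and wait for readiness.
    kubectl --context auto-164328 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-164328 wait --for=condition=Ready pod -l app=netcat --timeout=15m
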
TestNetworkPlugins/group/kindnet/Start (70.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0917 09:47:16.997225  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:17.003506  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:17.016220  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:17.037675  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:17.079599  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:17.161917  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:17.323326  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:17.645235  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:18.286602  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:19.567974  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:47:22.129729  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m10.84177653s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.84s)
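
The per-plugin Start tests differ only in the --cni value passed to minikube; a minimal sketch condensed from the commands logged in each group (all variants in this run also pass --driver=docker and --container-runtime=crio):

    # Built-in plugin selected by name:
    out/minikube-linux-amd64 start -p kindnet-164328 --memory=3072 --cni=kindnet \
      --driver=docker --container-runtime=crio
    # A custom manifest can be supplied instead of a named plugin:
    out/minikube-linux-amd64 start -p custom-flannel-164328 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio
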
TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-164328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
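
DNS, Localhost and HairPin are the same three probes run inside the deployed netcat pod; a minimal sketch, taken from the commands logged above (the hairpin probe connects back to the pod through its own service name, which only succeeds when the CNI supports hairpin traffic):

    # DNS: resolve an in-cluster service name.
    kubectl --context auto-164328 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: port 8080 inside the pod itself.
    kubectl --context auto-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod's own service ("netcat") routed back to itself.
    kubectl --context auto-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
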
TestNetworkPlugins/group/calico/Start (50.39s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0917 09:47:57.975210  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:00.898439  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:00.904836  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:00.916265  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:00.937743  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:00.979134  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:01.060627  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:01.222388  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:01.543752  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:02.186017  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:03.467728  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:06.030051  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:11.151632  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:48:21.393280  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.392705471s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.39s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q2dgp" [5aba6e63-7e7b-40f6-9dec-393e2b910b2c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00356131s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lb9fj" [b3900259-040c-4889-a95f-7d0987fd8257] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004581018s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-g5fbj" [c5e7b93c-1199-4414-b357-cbece28b3c2a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004388959s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-164328 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-164328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c5mdc" [51ad1069-fbdb-48b9-8cd5-abdb00fd1e63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c5mdc" [51ad1069-fbdb-48b9-8cd5-abdb00fd1e63] Running
E0917 09:48:38.937237  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004793501s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lb9fj" [b3900259-040c-4889-a95f-7d0987fd8257] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.111109203s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-649251 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-164328 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-164328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xw8t9" [d4bb15f2-2a06-4ab8-9c17-30eb047cea9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xw8t9" [d4bb15f2-2a06-4ab8-9c17-30eb047cea9e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004440329s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-649251 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-649251 --alsologtostderr -v=1
E0917 09:48:41.874856  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-649251 -n embed-certs-649251
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-649251 -n embed-certs-649251: exit status 2 (288.645156ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-649251 -n embed-certs-649251
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-649251 -n embed-certs-649251: exit status 2 (293.107723ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-649251 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-649251 -n embed-certs-649251
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-649251 -n embed-certs-649251
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.67s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-164328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-164328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (50.59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.593956191s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.59s)

TestNetworkPlugins/group/flannel/Start (52.90s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (52.896275461s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.90s)

TestNetworkPlugins/group/bridge/Start (71.37s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0917 09:49:10.610132  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
E0917 09:49:22.837061  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/old-k8s-version-155649/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m11.367209912s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.37s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-164328 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-164328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-48hrg" [a4c65075-fd25-491b-bb7b-3872c35d45ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-48hrg" [a4c65075-fd25-491b-bb7b-3872c35d45ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003502936s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m8xb7" [a5d21800-6da7-408c-8072-4aa09961e9da] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004394364s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m8xb7" [a5d21800-6da7-408c-8072-4aa09961e9da] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003553518s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-951221 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-164328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)
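Note: the DNS subtest resolves the apiserver's Service name from inside the probe pod. A hand-run equivalent (a sketch, assuming the custom-flannel-164328 context from this run still exists):

	kubectl --context custom-flannel-164328 exec deployment/netcat -- nslookup kubernetes.default

A non-zero exit here would point at the CNI failing to deliver traffic to the cluster DNS address (10.96.0.10, the same address the debug probes later in this report dig against).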

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-951221 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-951221 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221: exit status 2 (287.940356ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221: exit status 2 (294.687966ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-951221 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-951221 -n default-k8s-diff-port-951221
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)
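Note: the Pause sequence above can be replayed by hand. A minimal sketch, assuming the default-k8s-diff-port-951221 profile from this run is still up:

	# pause the control plane, then confirm the apiserver reports Paused and the kubelet Stopped
	out/minikube-linux-amd64 pause -p default-k8s-diff-port-951221
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-951221   # prints "Paused", exits 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-951221     # prints "Stopped", exits 2
	# unpause and re-check both components
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-951221
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-951221
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-951221

minikube status encodes component state in its exit code, which is why the test records "status error: exit status 2 (may be ok)" rather than failing.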

TestNetworkPlugins/group/enable-default-cni/Start (64.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-164328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m4.692899711s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.69s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-w86qp" [24780ad7-6971-4b44-bac4-70866d62d82f] Running
E0917 09:50:00.859292  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/no-preload-464591/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003749509s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-164328 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-164328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-92zlb" [a4500b3b-e1ad-4f7b-82e1-ddf03d1070cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-92zlb" [a4500b3b-e1ad-4f7b-82e1-ddf03d1070cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003432742s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-164328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-164328 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-164328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6mrdr" [8bef1838-db51-445c-8e97-ea925bae4389] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6mrdr" [8bef1838-db51-445c-8e97-ea925bae4389] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.0038464s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)
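Note: every NetCatPod subtest follows the same fixture pattern seen here. A minimal sketch, assuming the bridge-164328 context and the suite's testdata/netcat-deployment.yaml:

	# (re)create the netcat probe deployment
	kubectl --context bridge-164328 replace --force -f testdata/netcat-deployment.yaml
	# block until the probe pod is Ready; kubectl wait is a stand-in for the
	# suite's own poll loop, which watches pods matching app=netcat in default
	kubectl --context bridge-164328 wait --for=condition=Ready pod -l app=netcat --timeout=15m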

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-164328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
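Note: Localhost and HairPin reuse the same nc invocation against two different data paths. Localhost dials the pod's own loopback on 8080, while HairPin dials the pod's own Service name ("netcat"), which only succeeds when the CNI hairpins the traffic back to the originating pod. A hand-run equivalent of the hairpin check, assuming the netcat Service created by the deployment above:

	kubectl --context bridge-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo hairpin ok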

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-164328 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-164328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wzvrk" [4e1026ab-4209-4e60-8acc-5f5e7e76046d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wzvrk" [4e1026ab-4209-4e60-8acc-5f5e7e76046d] Running
E0917 09:51:07.543353  396125 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/functional-554247/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003644488s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-164328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-164328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

Test skip (25/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-391839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-391839
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.2s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-164328 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-164328

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-164328

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /etc/hosts:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /etc/resolv.conf:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-164328

>>> host: crictl pods:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: crictl containers:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> k8s: describe netcat deployment:
error: context "kubenet-164328" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-164328" does not exist

>>> k8s: netcat logs:
error: context "kubenet-164328" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-164328" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-164328" does not exist

>>> k8s: coredns logs:
error: context "kubenet-164328" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-164328" does not exist

>>> k8s: api server logs:
error: context "kubenet-164328" does not exist

>>> host: /etc/cni:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: ip a s:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: ip r s:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: iptables-save:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: iptables table nat:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-164328" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-164328" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-164328" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: kubelet daemon config:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> k8s: kubelet logs:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:38:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-869453
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:39:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-701815
contexts:
- context:
    cluster: kubernetes-upgrade-869453
    user: kubernetes-upgrade-869453
  name: kubernetes-upgrade-869453
- context:
    cluster: pause-701815
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:39:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-701815
  name: pause-701815
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-869453
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/kubernetes-upgrade-869453/client.crt
    client-key: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/kubernetes-upgrade-869453/client.key
- name: pause-701815
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/pause-701815/client.crt
    client-key: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/pause-701815/client.key
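Note: nothing in this kubeconfig references kubenet-164328, which is why every probe above reports "context was not found": the test was skipped before a cluster or context was ever created, so the dump only shows profiles apparently left over from other tests in this run (kubernetes-upgrade-869453 and pause-701815). A hand-run equivalent of this dump, assuming the same KUBECONFIG the suite used:

	kubectl config get-contexts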

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-164328

>>> host: docker daemon status:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: docker daemon config:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: docker system info:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: cri-docker daemon status:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: cri-docker daemon config:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: cri-dockerd version:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: containerd daemon status:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: containerd daemon config:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: containerd config dump:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: crio daemon status:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: crio daemon config:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: /etc/crio:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

>>> host: crio config:
* Profile "kubenet-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-164328"

----------------------- debugLogs end: kubenet-164328 [took: 3.036718507s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-164328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-164328
--- SKIP: TestNetworkPlugins/group/kubenet (3.20s)

TestNetworkPlugins/group/cilium (3.48s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-164328 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-164328

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-164328

>>> host: /etc/nsswitch.conf:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /etc/hosts:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /etc/resolv.conf:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-164328

>>> host: crictl pods:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: crictl containers:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> k8s: describe netcat deployment:
error: context "cilium-164328" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-164328" does not exist

>>> k8s: netcat logs:
error: context "cilium-164328" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-164328" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-164328" does not exist

>>> k8s: coredns logs:
error: context "cilium-164328" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-164328" does not exist

>>> k8s: api server logs:
error: context "cilium-164328" does not exist

>>> host: /etc/cni:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: ip a s:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: ip r s:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: iptables-save:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: iptables table nat:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-164328

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-164328

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-164328" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-164328" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-164328

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-164328

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-164328" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-164328" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-164328" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-164328" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-164328" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: kubelet daemon config:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> k8s: kubelet logs:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:40:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-777221
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:38:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-869453
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19648-389277/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:39:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-701815
contexts:
- context:
    cluster: NoKubernetes-777221
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:40:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-777221
  name: NoKubernetes-777221
- context:
    cluster: kubernetes-upgrade-869453
    user: kubernetes-upgrade-869453
  name: kubernetes-upgrade-869453
- context:
    cluster: pause-701815
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 09:39:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-701815
  name: pause-701815
current-context: NoKubernetes-777221
kind: Config
preferences: {}
users:
- name: NoKubernetes-777221
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/NoKubernetes-777221/client.crt
    client-key: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/NoKubernetes-777221/client.key
- name: kubernetes-upgrade-869453
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/kubernetes-upgrade-869453/client.crt
    client-key: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/kubernetes-upgrade-869453/client.key
- name: pause-701815
  user:
    client-certificate: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/pause-701815/client.crt
    client-key: /home/jenkins/minikube-integration/19648-389277/.minikube/profiles/pause-701815/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-164328

>>> host: docker daemon status:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: docker daemon config:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: docker system info:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: cri-docker daemon status:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: cri-docker daemon config:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: cri-dockerd version:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: containerd daemon status:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: containerd daemon config:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: containerd config dump:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: crio daemon status:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: crio daemon config:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: /etc/crio:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

>>> host: crio config:
* Profile "cilium-164328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-164328"

----------------------- debugLogs end: cilium-164328 [took: 3.320446103s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-164328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-164328
--- SKIP: TestNetworkPlugins/group/cilium (3.48s)