Test Report: Docker_Linux_crio_arm64 19679

7cae0481c1ae024841826a3639f158d099448b48:2024-09-20:36298
Failed tests (3/327)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 33    | TestAddons/parallel/Registry      | 75.36        |
| 34    | TestAddons/parallel/Ingress       | 152.08       |
| 36    | TestAddons/parallel/MetricsServer | 350.48       |
TestAddons/parallel/Registry (75.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.86281ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-w8gt6" [ded46fe6-d8da-4546-81fd-d1f1949dcadb] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003216719s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8ghgp" [5a98470b-31f7-4f1c-9586-f681f375453b] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003783273s
addons_test.go:338: (dbg) Run:  kubectl --context addons-060912 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-060912 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-060912 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.113694382s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-060912 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 ip
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-060912
helpers_test.go:235: (dbg) docker inspect addons-060912:

-- stdout --
	[
	    {
	        "Id": "f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5",
	        "Created": "2024-09-20T18:52:39.740365125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 594367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T18:52:39.865408091Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/hosts",
	        "LogPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5-json.log",
	        "Name": "/addons-060912",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-060912:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-060912",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99-init/diff:/var/lib/docker/overlay2/a92e9e9bba1980ffadfbad04ca227253691a545526e59e24c9fd42023a78d162/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-060912",
	                "Source": "/var/lib/docker/volumes/addons-060912/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-060912",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-060912",
	                "name.minikube.sigs.k8s.io": "addons-060912",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e9d76a1d4f78b17f57be343ce89cd0030fce0fd6b21bfc9013be4de1e162bf8",
	            "SandboxKey": "/var/run/docker/netns/5e9d76a1d4f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-060912": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "01fa9f6b959f74a22901f7d7f124f8f0aa8983b8fa8db0965f1c5571e7649814",
	                    "EndpointID": "a39b41b3ad3e63a6fe1c844d5ffbf7cf765e19876c05de1e6494d1a2189fa00b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-060912",
	                        "f46765527c33"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-060912 -n addons-060912
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 logs -n 25: (1.689143079s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-469167   | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | -p download-only-469167              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-469167              | download-only-469167   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | -o=json --download-only              | download-only-447269   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | -p download-only-447269              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-447269              | download-only-447269   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-469167              | download-only-469167   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-447269              | download-only-447269   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | --download-only -p                   | download-docker-266880 | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | download-docker-266880               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-266880            | download-docker-266880 | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | --download-only -p                   | binary-mirror-083327   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | binary-mirror-083327                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44087               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-083327              | binary-mirror-083327   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| addons  | enable dashboard -p                  | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | addons-060912                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | addons-060912                        |                        |         |         |                     |                     |
	| start   | -p addons-060912 --wait=true         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:55 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:03 UTC | 20 Sep 24 19:03 UTC |
	|         | -p addons-060912                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:03 UTC | 20 Sep 24 19:03 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-060912 ip                     | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	| addons  | addons-060912 addons                 | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC |                     |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:52:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:52:15.407585  593872 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:52:15.407747  593872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:15.407757  593872 out.go:358] Setting ErrFile to fd 2...
	I0920 18:52:15.407763  593872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:15.408019  593872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 18:52:15.408464  593872 out.go:352] Setting JSON to false
	I0920 18:52:15.409334  593872 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9286,"bootTime":1726849050,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 18:52:15.409413  593872 start.go:139] virtualization:  
	I0920 18:52:15.412765  593872 out.go:177] * [addons-060912] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:52:15.415653  593872 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:52:15.415768  593872 notify.go:220] Checking for updates...
	I0920 18:52:15.421427  593872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:52:15.424323  593872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 18:52:15.427237  593872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	I0920 18:52:15.429911  593872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:52:15.432646  593872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:52:15.435403  593872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:52:15.470290  593872 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:52:15.470417  593872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:52:15.520925  593872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:52:15.51145031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:52:15.521041  593872 docker.go:318] overlay module found
	I0920 18:52:15.523900  593872 out.go:177] * Using the docker driver based on user configuration
	I0920 18:52:15.526500  593872 start.go:297] selected driver: docker
	I0920 18:52:15.526517  593872 start.go:901] validating driver "docker" against <nil>
	I0920 18:52:15.526531  593872 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:52:15.527216  593872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:52:15.581330  593872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:52:15.571863527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:52:15.581548  593872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:52:15.581786  593872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:52:15.584366  593872 out.go:177] * Using Docker driver with root privileges
	I0920 18:52:15.587045  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:52:15.587107  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:52:15.587121  593872 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:52:15.587223  593872 start.go:340] cluster config:
	{Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:15.590219  593872 out.go:177] * Starting "addons-060912" primary control-plane node in "addons-060912" cluster
	I0920 18:52:15.592826  593872 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:52:15.595652  593872 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:52:15.598342  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:15.598399  593872 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 18:52:15.598412  593872 cache.go:56] Caching tarball of preloaded images
	I0920 18:52:15.598446  593872 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:52:15.598514  593872 preload.go:172] Found /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 18:52:15.598525  593872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:52:15.598880  593872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json ...
	I0920 18:52:15.598952  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json: {Name:mk641e5e8bae111e7b0856105b10230ca65c9fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:15.614244  593872 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:52:15.614382  593872 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:52:15.614407  593872 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:52:15.614416  593872 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:52:15.614424  593872 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:52:15.614429  593872 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 18:52:32.649742  593872 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 18:52:32.649783  593872 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:52:32.649812  593872 start.go:360] acquireMachinesLock for addons-060912: {Name:mkdf9efeada37d375617519bd8189e870133c61c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:52:32.649937  593872 start.go:364] duration metric: took 105.149µs to acquireMachinesLock for "addons-060912"
	I0920 18:52:32.649968  593872 start.go:93] Provisioning new machine with config: &{Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:52:32.650096  593872 start.go:125] createHost starting for "" (driver="docker")
	I0920 18:52:32.652781  593872 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 18:52:32.653060  593872 start.go:159] libmachine.API.Create for "addons-060912" (driver="docker")
	I0920 18:52:32.653099  593872 client.go:168] LocalClient.Create starting
	I0920 18:52:32.653230  593872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem
	I0920 18:52:32.860960  593872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem
	I0920 18:52:33.807141  593872 cli_runner.go:164] Run: docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 18:52:33.822909  593872 cli_runner.go:211] docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 18:52:33.823003  593872 network_create.go:284] running [docker network inspect addons-060912] to gather additional debugging logs...
	I0920 18:52:33.823041  593872 cli_runner.go:164] Run: docker network inspect addons-060912
	W0920 18:52:33.836862  593872 cli_runner.go:211] docker network inspect addons-060912 returned with exit code 1
	I0920 18:52:33.836897  593872 network_create.go:287] error running [docker network inspect addons-060912]: docker network inspect addons-060912: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-060912 not found
	I0920 18:52:33.836912  593872 network_create.go:289] output of [docker network inspect addons-060912]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-060912 not found
	
	** /stderr **
	I0920 18:52:33.837018  593872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:52:33.853516  593872 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400048fc60}
	I0920 18:52:33.853561  593872 network_create.go:124] attempt to create docker network addons-060912 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 18:52:33.853624  593872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-060912 addons-060912
	I0920 18:52:33.925138  593872 network_create.go:108] docker network addons-060912 192.168.49.0/24 created
	I0920 18:52:33.925170  593872 kic.go:121] calculated static IP "192.168.49.2" for the "addons-060912" container
	I0920 18:52:33.925251  593872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 18:52:33.939300  593872 cli_runner.go:164] Run: docker volume create addons-060912 --label name.minikube.sigs.k8s.io=addons-060912 --label created_by.minikube.sigs.k8s.io=true
	I0920 18:52:33.956121  593872 oci.go:103] Successfully created a docker volume addons-060912
	I0920 18:52:33.956221  593872 cli_runner.go:164] Run: docker run --rm --name addons-060912-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --entrypoint /usr/bin/test -v addons-060912:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 18:52:35.542485  593872 cli_runner.go:217] Completed: docker run --rm --name addons-060912-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --entrypoint /usr/bin/test -v addons-060912:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.586222321s)
	I0920 18:52:35.542517  593872 oci.go:107] Successfully prepared a docker volume addons-060912
	I0920 18:52:35.542537  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:35.542557  593872 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 18:52:35.542630  593872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-060912:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 18:52:39.667870  593872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-060912:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.12519698s)
	I0920 18:52:39.667901  593872 kic.go:203] duration metric: took 4.125341455s to extract preloaded images to volume ...
	W0920 18:52:39.668057  593872 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 18:52:39.668171  593872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 18:52:39.725179  593872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-060912 --name addons-060912 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-060912 --network addons-060912 --ip 192.168.49.2 --volume addons-060912:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 18:52:40.064748  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Running}}
	I0920 18:52:40.090550  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.119088  593872 cli_runner.go:164] Run: docker exec addons-060912 stat /var/lib/dpkg/alternatives/iptables
	I0920 18:52:40.194481  593872 oci.go:144] the created container "addons-060912" has a running status.
	I0920 18:52:40.194657  593872 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa...
	I0920 18:52:40.558917  593872 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 18:52:40.602421  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.629886  593872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 18:52:40.629905  593872 kic_runner.go:114] Args: [docker exec --privileged addons-060912 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 18:52:40.708677  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.734009  593872 machine.go:93] provisionDockerMachine start ...
	I0920 18:52:40.734111  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:40.755383  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:40.755665  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:40.755687  593872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:52:40.930414  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-060912
	
	I0920 18:52:40.930441  593872 ubuntu.go:169] provisioning hostname "addons-060912"
	I0920 18:52:40.930507  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:40.955848  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:40.956093  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:40.956114  593872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-060912 && echo "addons-060912" | sudo tee /etc/hostname
	I0920 18:52:41.124769  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-060912
	
	I0920 18:52:41.124926  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:41.150096  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:41.150348  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:41.150366  593872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-060912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-060912/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-060912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:52:41.295129  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:52:41.295158  593872 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19679-586329/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-586329/.minikube}
	I0920 18:52:41.295190  593872 ubuntu.go:177] setting up certificates
	I0920 18:52:41.295203  593872 provision.go:84] configureAuth start
	I0920 18:52:41.295277  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:41.317921  593872 provision.go:143] copyHostCerts
	I0920 18:52:41.318013  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/ca.pem (1082 bytes)
	I0920 18:52:41.318141  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/cert.pem (1123 bytes)
	I0920 18:52:41.318206  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/key.pem (1679 bytes)
	I0920 18:52:41.318258  593872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem org=jenkins.addons-060912 san=[127.0.0.1 192.168.49.2 addons-060912 localhost minikube]
	I0920 18:52:42.112316  593872 provision.go:177] copyRemoteCerts
	I0920 18:52:42.112394  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:52:42.112441  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.134267  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.242047  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:52:42.271920  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:52:42.299774  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:52:42.328079  593872 provision.go:87] duration metric: took 1.032855668s to configureAuth
	I0920 18:52:42.328107  593872 ubuntu.go:193] setting minikube options for container-runtime
	I0920 18:52:42.328339  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:52:42.328485  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.347344  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:42.347620  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:42.347642  593872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:52:42.592794  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:52:42.592858  593872 machine.go:96] duration metric: took 1.858825465s to provisionDockerMachine
	I0920 18:52:42.592883  593872 client.go:171] duration metric: took 9.939773855s to LocalClient.Create
	I0920 18:52:42.592928  593872 start.go:167] duration metric: took 9.939858146s to libmachine.API.Create "addons-060912"
	I0920 18:52:42.592956  593872 start.go:293] postStartSetup for "addons-060912" (driver="docker")
	I0920 18:52:42.592983  593872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:52:42.593088  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:52:42.593176  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.610673  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.712244  593872 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:52:42.715200  593872 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 18:52:42.715236  593872 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 18:52:42.715248  593872 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 18:52:42.715255  593872 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 18:52:42.715270  593872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-586329/.minikube/addons for local assets ...
	I0920 18:52:42.715339  593872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-586329/.minikube/files for local assets ...
	I0920 18:52:42.715362  593872 start.go:296] duration metric: took 122.386575ms for postStartSetup
	I0920 18:52:42.715678  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:42.734222  593872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json ...
	I0920 18:52:42.734515  593872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:52:42.734561  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.751254  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.847551  593872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 18:52:42.851990  593872 start.go:128] duration metric: took 10.201875795s to createHost
	I0920 18:52:42.852014  593872 start.go:83] releasing machines lock for "addons-060912", held for 10.20206475s
	I0920 18:52:42.852104  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:42.869047  593872 ssh_runner.go:195] Run: cat /version.json
	I0920 18:52:42.869104  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.869386  593872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:52:42.869455  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.899611  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.901003  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:43.143986  593872 ssh_runner.go:195] Run: systemctl --version
	I0920 18:52:43.148494  593872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:52:43.290058  593872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:52:43.294460  593872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:52:43.319067  593872 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 18:52:43.319189  593872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:52:43.355578  593872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 18:52:43.355601  593872 start.go:495] detecting cgroup driver to use...
	I0920 18:52:43.355665  593872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 18:52:43.355740  593872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:52:43.372488  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:52:43.384584  593872 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:52:43.384660  593872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:52:43.398596  593872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:52:43.413969  593872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:52:43.506921  593872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:52:43.598933  593872 docker.go:233] disabling docker service ...
	I0920 18:52:43.599074  593872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:52:43.619211  593872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:52:43.632097  593872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:52:43.733486  593872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:52:43.832796  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:52:43.844479  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:52:43.861973  593872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:52:43.862048  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.873308  593872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:52:43.873384  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.884037  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.894744  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.905984  593872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:52:43.916341  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.926330  593872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.942760  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.952451  593872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:52:43.961121  593872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:52:43.969336  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:44.051836  593872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:52:44.177573  593872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:52:44.177688  593872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:52:44.181787  593872 start.go:563] Will wait 60s for crictl version
	I0920 18:52:44.181856  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:52:44.185690  593872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:52:44.231062  593872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 18:52:44.231227  593872 ssh_runner.go:195] Run: crio --version
	I0920 18:52:44.269973  593872 ssh_runner.go:195] Run: crio --version
	I0920 18:52:44.310781  593872 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 18:52:44.313034  593872 cli_runner.go:164] Run: docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:52:44.329327  593872 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 18:52:44.332861  593872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:52:44.343516  593872 kubeadm.go:883] updating cluster {Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:52:44.343644  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:44.343708  593872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:44.419323  593872 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:44.419350  593872 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:52:44.419407  593872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:44.460038  593872 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:44.460063  593872 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:52:44.460072  593872 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 18:52:44.460202  593872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-060912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:52:44.460306  593872 ssh_runner.go:195] Run: crio config
	I0920 18:52:44.514388  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:52:44.514413  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:52:44.514425  593872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:52:44.514455  593872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-060912 NodeName:addons-060912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:52:44.514692  593872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-060912"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:52:44.514779  593872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:52:44.524006  593872 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:52:44.524086  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:52:44.532920  593872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 18:52:44.550839  593872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:52:44.569315  593872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0920 18:52:44.588095  593872 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 18:52:44.591834  593872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:52:44.603202  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:44.683106  593872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:52:44.698119  593872 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912 for IP: 192.168.49.2
	I0920 18:52:44.698180  593872 certs.go:194] generating shared ca certs ...
	I0920 18:52:44.698214  593872 certs.go:226] acquiring lock for ca certs: {Name:mk7eb18302258cdace745a9485ebacfefa55b617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:44.698372  593872 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key
	I0920 18:52:45.773992  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt ...
	I0920 18:52:45.774024  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt: {Name:mk69bb3c03ec081974b98f7c83bdeca9a6b769c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:45.774223  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key ...
	I0920 18:52:45.774236  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key: {Name:mkb28aa16c08ff68a5c63f20cf7a4bc238a65fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:45.774329  593872 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key
	I0920 18:52:46.306094  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt ...
	I0920 18:52:46.306172  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt: {Name:mk13a902be7ee771aaabf84d4d3b54c93512ec07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.306433  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key ...
	I0920 18:52:46.306468  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key: {Name:mk1a89b4cc2e765480e21d5ef942bf06a139d088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.307202  593872 certs.go:256] generating profile certs ...
	I0920 18:52:46.307348  593872 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key
	I0920 18:52:46.307374  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt with IP's: []
	I0920 18:52:46.605180  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt ...
	I0920 18:52:46.605217  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: {Name:mk8ec6a9f7340d97847cfc91d6f9300f0c6bcb28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.605895  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key ...
	I0920 18:52:46.605916  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key: {Name:mk386836124c30368ae858b7208f9c6a723630c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.606065  593872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2
	I0920 18:52:46.606089  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 18:52:46.979328  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 ...
	I0920 18:52:46.979362  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2: {Name:mk3de371d8cb695b97e343d91e61d450c7d1fceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.980031  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2 ...
	I0920 18:52:46.980049  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2: {Name:mk9c2eba1553b51025132aa06ce9c8b0e76efbd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.980539  593872 certs.go:381] copying /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 -> /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt
	I0920 18:52:46.980627  593872 certs.go:385] copying /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2 -> /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key
	I0920 18:52:46.980686  593872 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key
	I0920 18:52:46.980709  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt with IP's: []
	I0920 18:52:47.324830  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt ...
	I0920 18:52:47.324865  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt: {Name:mk4ae1dd5d3ae6c97cd47828e57b9a54fe850ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:47.325050  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key ...
	I0920 18:52:47.325068  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key: {Name:mkc15d867a2714a19ac6e38280d1d8789074dcb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:47.325295  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:52:47.325345  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:52:47.325375  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:52:47.325407  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem (1679 bytes)
	I0920 18:52:47.326508  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:52:47.355471  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 18:52:47.380228  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:52:47.404994  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:52:47.431136  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:52:47.456460  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:52:47.482481  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:52:47.506787  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:52:47.530822  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:52:47.555789  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:52:47.573794  593872 ssh_runner.go:195] Run: openssl version
	I0920 18:52:47.579677  593872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:52:47.589418  593872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.593050  593872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:52 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.593170  593872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.600533  593872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:52:47.610126  593872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:52:47.613505  593872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:52:47.613554  593872 kubeadm.go:392] StartCluster: {Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:47.613633  593872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:52:47.613691  593872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:52:47.655005  593872 cri.go:89] found id: ""
	I0920 18:52:47.655106  593872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:52:47.664307  593872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:52:47.673271  593872 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 18:52:47.673378  593872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:52:47.682354  593872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:52:47.682377  593872 kubeadm.go:157] found existing configuration files:
	
	I0920 18:52:47.682450  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:52:47.692197  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:52:47.692269  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:52:47.701005  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:52:47.709846  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:52:47.709939  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:52:47.718667  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:52:47.727606  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:52:47.727692  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:52:47.736256  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:52:47.745178  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:52:47.745277  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:52:47.753885  593872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 18:52:47.794524  593872 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:52:47.794742  593872 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:52:47.830867  593872 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 18:52:47.831080  593872 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 18:52:47.831147  593872 kubeadm.go:310] OS: Linux
	I0920 18:52:47.831230  593872 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 18:52:47.831314  593872 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 18:52:47.831391  593872 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 18:52:47.831469  593872 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 18:52:47.831550  593872 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 18:52:47.831627  593872 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 18:52:47.831704  593872 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 18:52:47.831782  593872 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 18:52:47.831867  593872 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 18:52:47.892879  593872 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:52:47.893045  593872 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:52:47.893173  593872 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:52:47.900100  593872 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:52:47.904767  593872 out.go:235]   - Generating certificates and keys ...
	I0920 18:52:47.904883  593872 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:52:47.904967  593872 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:52:48.301483  593872 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:52:48.505712  593872 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:52:48.627729  593872 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:52:49.408566  593872 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:52:49.585470  593872 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:52:49.585855  593872 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-060912 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:52:50.403787  593872 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:52:50.404133  593872 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-060912 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:52:50.541148  593872 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:52:50.956925  593872 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:52:51.982371  593872 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:52:51.982653  593872 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:52:52.374506  593872 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:52:52.684664  593872 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:52:53.299054  593872 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:52:53.724444  593872 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:52:54.066667  593872 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:52:54.067475  593872 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:52:54.070541  593872 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:52:54.072885  593872 out.go:235]   - Booting up control plane ...
	I0920 18:52:54.072994  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:52:54.073071  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:52:54.073988  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:52:54.087870  593872 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:52:54.094550  593872 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:52:54.094874  593872 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:52:54.193678  593872 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:52:54.193802  593872 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:52:55.195346  593872 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001638684s
	I0920 18:52:55.195439  593872 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:53:00.697120  593872 kubeadm.go:310] [api-check] The API server is healthy after 5.501870038s
	I0920 18:53:00.728997  593872 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:53:00.750818  593872 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:53:00.777564  593872 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:53:00.777765  593872 kubeadm.go:310] [mark-control-plane] Marking the node addons-060912 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:53:00.791626  593872 kubeadm.go:310] [bootstrap-token] Using token: 3mukj1.5gr6p80qxuq1esbm
	I0920 18:53:00.793695  593872 out.go:235]   - Configuring RBAC rules ...
	I0920 18:53:00.793825  593872 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:53:00.798878  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:53:00.806432  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:53:00.810066  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:53:00.815042  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:53:00.818568  593872 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:53:01.105603  593872 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:53:01.548911  593872 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:53:02.106478  593872 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:53:02.106507  593872 kubeadm.go:310] 
	I0920 18:53:02.106578  593872 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:53:02.106584  593872 kubeadm.go:310] 
	I0920 18:53:02.106721  593872 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:53:02.106734  593872 kubeadm.go:310] 
	I0920 18:53:02.106772  593872 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:53:02.106834  593872 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:53:02.106884  593872 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:53:02.106888  593872 kubeadm.go:310] 
	I0920 18:53:02.106941  593872 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:53:02.106945  593872 kubeadm.go:310] 
	I0920 18:53:02.106992  593872 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:53:02.106997  593872 kubeadm.go:310] 
	I0920 18:53:02.107062  593872 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:53:02.107137  593872 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:53:02.107203  593872 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:53:02.107208  593872 kubeadm.go:310] 
	I0920 18:53:02.107290  593872 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:53:02.107368  593872 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:53:02.107373  593872 kubeadm.go:310] 
	I0920 18:53:02.107455  593872 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3mukj1.5gr6p80qxuq1esbm \
	I0920 18:53:02.107556  593872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee5188aaaabb34e982a2e59e30a557aaa604ab6ab39002e0379fe9f0994613c \
	I0920 18:53:02.107576  593872 kubeadm.go:310] 	--control-plane 
	I0920 18:53:02.107579  593872 kubeadm.go:310] 
	I0920 18:53:02.107664  593872 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:53:02.107668  593872 kubeadm.go:310] 
	I0920 18:53:02.107748  593872 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3mukj1.5gr6p80qxuq1esbm \
	I0920 18:53:02.107850  593872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee5188aaaabb34e982a2e59e30a557aaa604ab6ab39002e0379fe9f0994613c 
	I0920 18:53:02.110386  593872 kubeadm.go:310] W0920 18:52:47.790907    1181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:53:02.110692  593872 kubeadm.go:310] W0920 18:52:47.791995    1181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:53:02.110919  593872 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 18:53:02.111098  593872 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:53:02.111123  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:53:02.111136  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:53:02.113349  593872 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:53:02.115174  593872 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:53:02.119312  593872 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:53:02.119335  593872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:53:02.142105  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:53:02.431658  593872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:53:02.431817  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:02.431901  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-060912 minikube.k8s.io/updated_at=2024_09_20T18_53_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-060912 minikube.k8s.io/primary=true
	I0920 18:53:02.446105  593872 ops.go:34] apiserver oom_adj: -16
	I0920 18:53:02.565999  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:03.066570  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:03.566114  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:04.066703  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:04.566700  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:05.066202  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:05.566810  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.066185  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.566942  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.656897  593872 kubeadm.go:1113] duration metric: took 4.225127214s to wait for elevateKubeSystemPrivileges
	I0920 18:53:06.656923  593872 kubeadm.go:394] duration metric: took 19.04337458s to StartCluster
	I0920 18:53:06.656941  593872 settings.go:142] acquiring lock: {Name:mk20a33ee294fe7ee1acfd59cbfa4fb0357cdddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:06.657086  593872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 18:53:06.657504  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/kubeconfig: {Name:mke1c46b803a8499b182d8427df0204efbd97826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:06.658369  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:53:06.658394  593872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:53:06.658659  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:06.658701  593872 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:53:06.658785  593872 addons.go:69] Setting yakd=true in profile "addons-060912"
	I0920 18:53:06.658801  593872 addons.go:234] Setting addon yakd=true in "addons-060912"
	I0920 18:53:06.658825  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.659339  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.659588  593872 addons.go:69] Setting inspektor-gadget=true in profile "addons-060912"
	I0920 18:53:06.659613  593872 addons.go:234] Setting addon inspektor-gadget=true in "addons-060912"
	I0920 18:53:06.659639  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.660068  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.660634  593872 addons.go:69] Setting cloud-spanner=true in profile "addons-060912"
	I0920 18:53:06.660658  593872 addons.go:234] Setting addon cloud-spanner=true in "addons-060912"
	I0920 18:53:06.660694  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.661122  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.664101  593872 addons.go:69] Setting metrics-server=true in profile "addons-060912"
	I0920 18:53:06.664174  593872 addons.go:234] Setting addon metrics-server=true in "addons-060912"
	I0920 18:53:06.664225  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.664719  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.667132  593872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-060912"
	I0920 18:53:06.667206  593872 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-060912"
	I0920 18:53:06.667241  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.667711  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.680289  593872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-060912"
	I0920 18:53:06.680324  593872 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-060912"
	I0920 18:53:06.680367  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.680373  593872 addons.go:69] Setting default-storageclass=true in profile "addons-060912"
	I0920 18:53:06.680394  593872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-060912"
	I0920 18:53:06.680712  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.680844  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.691123  593872 addons.go:69] Setting registry=true in profile "addons-060912"
	I0920 18:53:06.691155  593872 addons.go:234] Setting addon registry=true in "addons-060912"
	I0920 18:53:06.691192  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.691227  593872 addons.go:69] Setting gcp-auth=true in profile "addons-060912"
	I0920 18:53:06.691250  593872 mustload.go:65] Loading cluster: addons-060912
	I0920 18:53:06.691423  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:06.691663  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.691671  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.711088  593872 addons.go:69] Setting storage-provisioner=true in profile "addons-060912"
	I0920 18:53:06.711123  593872 addons.go:234] Setting addon storage-provisioner=true in "addons-060912"
	I0920 18:53:06.711160  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.711632  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.711881  593872 addons.go:69] Setting ingress=true in profile "addons-060912"
	I0920 18:53:06.711898  593872 addons.go:234] Setting addon ingress=true in "addons-060912"
	I0920 18:53:06.711935  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.712343  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.723158  593872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-060912"
	I0920 18:53:06.723179  593872 addons.go:69] Setting ingress-dns=true in profile "addons-060912"
	I0920 18:53:06.723193  593872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-060912"
	I0920 18:53:06.723201  593872 addons.go:234] Setting addon ingress-dns=true in "addons-060912"
	I0920 18:53:06.723253  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.723525  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.723678  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.728597  593872 addons.go:69] Setting volcano=true in profile "addons-060912"
	I0920 18:53:06.728639  593872 addons.go:234] Setting addon volcano=true in "addons-060912"
	I0920 18:53:06.728679  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.729150  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.748969  593872 out.go:177] * Verifying Kubernetes components...
	I0920 18:53:06.760944  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:53:06.763463  593872 addons.go:69] Setting volumesnapshots=true in profile "addons-060912"
	I0920 18:53:06.763497  593872 addons.go:234] Setting addon volumesnapshots=true in "addons-060912"
	I0920 18:53:06.763545  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.764046  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.800789  593872 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:53:06.802865  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:53:06.803037  593872 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:53:06.803166  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.821921  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:53:06.824580  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:53:06.824689  593872 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:53:06.827502  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:53:06.830370  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:53:06.832993  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:53:06.836463  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:53:06.889558  593872 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 18:53:06.890713  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:53:06.893342  593872 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:53:06.893363  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:53:06.893428  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.919558  593872 addons.go:234] Setting addon default-storageclass=true in "addons-060912"
	I0920 18:53:06.919598  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.923397  593872 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:53:06.924653  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.928129  593872 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-060912"
	I0920 18:53:06.936192  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.936680  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.949702  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:06.931219  593872 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:53:06.949911  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:53:06.949984  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.950158  593872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:53:06.953013  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:06.950351  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:53:06.931395  593872 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:53:06.955338  593872 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:53:06.955415  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	W0920 18:53:06.950474  593872 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:53:06.962421  593872 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:53:06.962836  593872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:53:06.962853  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:53:06.962918  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.950358  593872 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:53:06.964337  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:06.979232  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:53:06.984485  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:53:06.979317  593872 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:53:07.001978  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:53:07.002102  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.019211  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:07.021298  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.038210  593872 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:53:07.038235  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:53:07.038313  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.038929  593872 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:53:07.039087  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:53:07.039119  593872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:53:07.039189  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.058038  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:53:07.062112  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:53:07.062141  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:53:07.062211  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.064857  593872 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:53:07.070082  593872 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:53:07.070106  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:53:07.070177  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.078060  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:53:07.078085  593872 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:53:07.078158  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.095923  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.105890  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.131430  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.134781  593872 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:53:07.136896  593872 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:53:07.138991  593872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:53:07.139091  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:53:07.139160  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.171105  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.173369  593872 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:53:07.173394  593872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:53:07.173464  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.203565  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.227872  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.240177  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.254474  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.256680  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.271504  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.296820  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.535433  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:53:07.535502  593872 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:53:07.580640  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:53:07.609002  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:53:07.636963  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:53:07.644507  593872 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:53:07.644575  593872 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:53:07.650358  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:53:07.748416  593872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:53:07.760493  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:53:07.760561  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:53:07.769526  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:53:07.769590  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:53:07.772733  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:53:07.772813  593872 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:53:07.776888  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:53:07.792143  593872 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:53:07.792217  593872 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:53:07.799482  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:53:07.799550  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:53:07.823809  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:53:07.841182  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:53:07.875970  593872 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:53:07.876048  593872 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:53:07.940871  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:53:07.940939  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:53:07.944185  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:53:07.944261  593872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:53:07.969102  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:53:07.969176  593872 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:53:08.008762  593872 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:53:08.008849  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:53:08.024429  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:53:08.024500  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:53:08.101275  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:53:08.101369  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:53:08.104605  593872 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:53:08.104668  593872 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:53:08.125142  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:53:08.125223  593872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:53:08.153003  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:53:08.153081  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:53:08.192290  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:53:08.213189  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:53:08.213258  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:53:08.237465  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:53:08.237541  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:53:08.266264  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:53:08.273091  593872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:53:08.273160  593872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:53:08.295064  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:53:08.337669  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:53:08.337737  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:53:08.361168  593872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:53:08.361244  593872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:53:08.381395  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:53:08.381462  593872 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:53:08.434137  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:53:08.434209  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:53:08.476175  593872 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:53:08.476243  593872 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:53:08.523767  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:53:08.523881  593872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:53:08.546597  593872 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:08.546686  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:53:08.570312  593872 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:53:08.570338  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:53:08.601983  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:53:08.602012  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:53:08.688455  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:53:08.767479  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:08.771253  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:53:08.771280  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:53:08.909646  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:53:08.909674  593872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:53:09.083120  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:53:09.366397  593872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.475641839s)
	I0920 18:53:09.366431  593872 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 18:53:10.656771  593872 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-060912" context rescaled to 1 replicas
	I0920 18:53:12.612519  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.031804896s)
	I0920 18:53:12.612722  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.924224384s)
	I0920 18:53:12.612735  593872 addons.go:475] Verifying addon ingress=true in "addons-060912"
	I0920 18:53:12.612613  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.975577954s)
	I0920 18:53:12.612622  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.96220398s)
	I0920 18:53:12.612632  593872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.864142489s)
	I0920 18:53:12.613848  593872 node_ready.go:35] waiting up to 6m0s for node "addons-060912" to be "Ready" ...
	I0920 18:53:12.612640  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.835693426s)
	I0920 18:53:12.612649  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.788768719s)
	I0920 18:53:12.612677  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.771429754s)
	I0920 18:53:12.612686  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.420319709s)
	I0920 18:53:12.614217  593872 addons.go:475] Verifying addon registry=true in "addons-060912"
	I0920 18:53:12.612700  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.346358677s)
	I0920 18:53:12.612710  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.317571088s)
	I0920 18:53:12.612603  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.003520156s)
	I0920 18:53:12.614629  593872 addons.go:475] Verifying addon metrics-server=true in "addons-060912"
	I0920 18:53:12.615210  593872 out.go:177] * Verifying ingress addon...
	I0920 18:53:12.616733  593872 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-060912 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:53:12.616811  593872 out.go:177] * Verifying registry addon...
	I0920 18:53:12.618820  593872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 18:53:12.621444  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:53:12.657275  593872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:53:12.657379  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:12.659859  593872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:53:12.659933  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0920 18:53:12.681416  593872 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:53:12.767378  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.999849337s)
	W0920 18:53:12.767492  593872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:53:12.767543  593872 retry.go:31] will retry after 146.594076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:53:12.914870  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:13.020525  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.937354531s)
	I0920 18:53:13.020615  593872 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-060912"
	I0920 18:53:13.023641  593872 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:53:13.026354  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:53:13.059604  593872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:53:13.059630  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:13.157073  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:13.158584  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:13.530916  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:13.623039  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:13.625710  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:14.031109  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:14.132801  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:14.133257  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:14.536672  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:14.617333  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:14.625172  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:14.626190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.037784  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:15.139083  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.140081  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:15.530807  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:15.625424  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:15.627060  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.811713  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.896729674s)
	I0920 18:53:16.030500  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:16.131340  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:16.132189  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:16.245904  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:53:16.246064  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:16.271250  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:16.400062  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:53:16.420647  593872 addons.go:234] Setting addon gcp-auth=true in "addons-060912"
	I0920 18:53:16.420709  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:16.421221  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:16.453077  593872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:53:16.453133  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:16.472122  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:16.530200  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:16.594150  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:16.595930  593872 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:53:16.598033  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:53:16.598096  593872 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:53:16.617576  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:16.623818  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:16.627395  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:16.654047  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:53:16.654122  593872 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:53:16.675521  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:53:16.675615  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:53:16.696169  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:53:17.030750  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:17.123475  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:17.129949  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:17.398772  593872 addons.go:475] Verifying addon gcp-auth=true in "addons-060912"
	I0920 18:53:17.400714  593872 out.go:177] * Verifying gcp-auth addon...
	I0920 18:53:17.403254  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:53:17.424327  593872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:53:17.424348  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:17.530789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:17.622216  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:17.624897  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:17.908276  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:18.032558  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:18.123585  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:18.125296  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:18.409068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:18.535824  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:18.619988  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:18.631248  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:18.632258  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:18.906952  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:19.031275  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:19.123974  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:19.125329  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:19.407041  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:19.530437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:19.623209  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:19.626778  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:19.907358  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:20.031410  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:20.124451  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:20.127885  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:20.408163  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:20.530304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:20.624641  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:20.627521  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:20.640571  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:20.906590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:21.030716  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:21.123396  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:21.125966  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:21.407311  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:21.530248  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:21.632768  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:21.633772  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:21.907461  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:22.030655  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:22.122491  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:22.124032  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:22.407518  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:22.529985  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:22.627170  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:22.627503  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:22.906531  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:23.030537  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:23.117862  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:23.122924  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:23.124460  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:23.406221  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:23.530623  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:23.622743  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:23.624225  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:23.906508  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:24.030964  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:24.123359  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:24.124718  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:24.406947  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:24.530397  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:24.622656  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:24.625089  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:24.906346  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:25.030863  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:25.123281  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:25.124762  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:25.406740  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:25.529934  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:25.618285  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:25.623223  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:25.625275  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:25.907343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:26.029874  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:26.122880  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:26.125558  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:26.406428  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:26.529876  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:26.622569  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:26.624263  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:26.907876  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:27.030892  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:27.122763  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:27.125752  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:27.407385  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:27.529890  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:27.623664  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:27.625235  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:27.906932  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:28.031546  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:28.117142  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:28.124003  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:28.125268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:28.407350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:28.530316  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:28.622496  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:28.625367  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:28.906550  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:29.030563  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:29.123518  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:29.125027  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:29.406272  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:29.530318  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:29.623487  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:29.626002  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:29.907054  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:30.034955  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:30.117886  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:30.123770  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:30.130136  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:30.406799  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:30.530068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:30.623431  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:30.625815  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:30.906721  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:31.030766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:31.122470  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:31.125131  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:31.406466  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:31.530019  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:31.623357  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:31.625132  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:31.906429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:32.030130  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:32.122724  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:32.125475  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:32.406950  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:32.530073  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:32.617262  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:32.623567  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:32.624606  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:32.906774  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:33.030618  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:33.122463  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:33.124352  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:33.406566  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:33.529976  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:33.623194  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:33.625569  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:33.906893  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:34.030568  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:34.124214  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:34.125664  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:34.406789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:34.530093  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:34.617503  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:34.622267  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:34.624640  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:34.906716  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:35.030814  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:35.122711  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:35.124484  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:35.406906  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:35.530090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:35.628989  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:35.643448  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:35.907749  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:36.033222  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:36.123115  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:36.125074  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:36.406478  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:36.530372  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:36.617537  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:36.623124  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:36.624707  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:36.907705  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:37.032609  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:37.122402  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:37.124436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:37.410158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:37.530964  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:37.623290  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:37.624638  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:37.908427  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:38.032501  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:38.123432  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:38.125097  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:38.407006  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:38.531090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:38.617637  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:38.623531  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:38.624757  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:38.907994  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:39.030429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:39.122831  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:39.125472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:39.406900  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:39.530683  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:39.622682  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:39.625408  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:39.906436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:40.032512  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:40.122833  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:40.125873  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:40.407481  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:40.530433  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:40.623305  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:40.625601  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:40.907104  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:41.030489  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:41.117535  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:41.123596  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:41.125740  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:41.408742  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:41.530414  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:41.623195  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:41.624942  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:41.906278  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:42.030219  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:42.124663  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:42.126897  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:42.406451  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:42.529842  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:42.623685  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:42.624861  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:42.907530  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:43.030270  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:43.118084  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:43.122827  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:43.124122  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:43.406531  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:43.530126  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:43.623043  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:43.624496  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:43.906195  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:44.030547  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:44.123616  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:44.124870  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:44.407296  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:44.530337  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:44.623165  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:44.624362  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:44.906714  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:45.030883  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:45.127738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:45.130246  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:45.407317  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:45.530858  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:45.617283  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:45.623572  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:45.626121  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:45.907349  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:46.029896  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:46.122967  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:46.124510  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:46.406878  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:46.529797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:46.623467  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:46.625901  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:46.907092  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:47.029591  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:47.123043  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:47.124598  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:47.407203  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:47.530170  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:47.617840  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:47.622988  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:47.625052  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:47.906284  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:48.030282  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:48.123543  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:48.125837  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:48.407567  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:48.530061  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:48.622624  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:48.624101  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:48.906319  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:49.029685  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:49.123390  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:49.125759  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:49.406675  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:49.530424  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:49.623059  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:49.624375  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:49.914789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:50.073315  593872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:53:50.073346  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:50.177542  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:50.178097  593872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:53:50.178119  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:50.178911  593872 node_ready.go:49] node "addons-060912" has status "Ready":"True"
	I0920 18:53:50.178932  593872 node_ready.go:38] duration metric: took 37.565064524s for node "addons-060912" to be "Ready" ...
	I0920 18:53:50.178943  593872 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:53:50.209529  593872 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:50.412871  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:50.534356  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:50.633995  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:50.635103  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:50.926509  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:51.040298  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:51.123773  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:51.127158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:51.407040  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:51.532755  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:51.632804  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:51.634087  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:51.932590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:52.032350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:52.124423  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:52.129016  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:52.216901  593872 pod_ready.go:93] pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.216927  593872 pod_ready.go:82] duration metric: took 2.007357992s for pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.216954  593872 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.227598  593872 pod_ready.go:93] pod "etcd-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.227626  593872 pod_ready.go:82] duration metric: took 10.663807ms for pod "etcd-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.227642  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.233476  593872 pod_ready.go:93] pod "kube-apiserver-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.233503  593872 pod_ready.go:82] duration metric: took 5.853067ms for pod "kube-apiserver-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.233518  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.239607  593872 pod_ready.go:93] pod "kube-controller-manager-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.239631  593872 pod_ready.go:82] duration metric: took 6.104882ms for pod "kube-controller-manager-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.239646  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c522g" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.245402  593872 pod_ready.go:93] pod "kube-proxy-c522g" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.245429  593872 pod_ready.go:82] duration metric: took 5.77497ms for pod "kube-proxy-c522g" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.245442  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.407590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:52.532029  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:52.614340  593872 pod_ready.go:93] pod "kube-scheduler-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.614364  593872 pod_ready.go:82] duration metric: took 368.914093ms for pod "kube-scheduler-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.614376  593872 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.628872  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:52.630785  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:52.907684  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:53.032348  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:53.123194  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:53.125921  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:53.407116  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:53.531905  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:53.632352  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:53.633355  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:53.907223  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:54.031444  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:54.123369  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:54.125452  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:54.406161  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:54.531797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:54.621522  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:54.625378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:54.626368  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:54.908405  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:55.033311  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:55.129235  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:55.136020  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:55.407641  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:55.532754  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:55.629988  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:55.630688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:55.908033  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:56.032956  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:56.131881  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:56.135239  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:56.407252  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:56.532357  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:56.634496  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:56.634762  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:56.675444  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:56.907632  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:57.032216  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:57.124438  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:57.129823  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:57.407619  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:57.531487  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:57.629839  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:57.629800  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:57.908525  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:58.032421  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:58.127448  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:58.128931  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:58.406845  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:58.532378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:58.629947  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:58.637452  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:58.907318  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:59.038238  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:59.123250  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:59.123692  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:59.124754  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:59.407399  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:59.531831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:59.625264  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:59.627267  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:59.907619  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:00.040374  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:00.143816  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:00.162956  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:00.414806  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:00.535472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:00.638003  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:00.654461  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:00.906727  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:01.033760  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:01.122760  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:01.127364  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:01.407190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:01.531912  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:01.620649  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:01.623645  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:01.625824  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:01.907353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:02.031678  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:02.127559  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:02.136258  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:02.407697  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:02.532649  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:02.624581  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:02.626864  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:02.907243  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:03.031855  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:03.124370  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:03.125997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:03.406572  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:03.531774  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:03.622198  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:03.623803  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:03.626096  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:03.906862  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:04.034339  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:04.123394  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:04.125057  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:04.406456  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:04.531188  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:04.624236  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:04.625251  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:04.907437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:05.034015  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:05.136328  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:05.140535  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:05.407750  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:05.531688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:05.622928  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:05.625977  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:05.628216  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:05.907392  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:06.035534  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:06.137035  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:06.140689  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:06.407645  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:06.532556  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:06.631129  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:06.637720  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:06.907369  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:07.033152  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:07.131047  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:07.132304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:07.407831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:07.534227  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:07.628560  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:07.630035  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:07.908146  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:08.046766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:08.128098  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:08.146618  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:08.148909  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:08.406526  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:08.531145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:08.622943  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:08.625835  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:08.907156  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:09.032047  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:09.125239  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:09.127763  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:09.406510  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:09.535708  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:09.627079  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:09.628475  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:09.908539  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:10.032103  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:10.128287  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:10.130963  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:10.134006  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:10.408230  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:10.537067  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:10.635350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:10.636775  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:10.909280  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:11.031922  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:11.141473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:11.143400  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:11.409121  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:11.533393  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:11.624605  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:11.626270  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:11.908376  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:12.033643  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:12.134194  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:12.135720  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:12.139063  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:12.408488  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:12.533149  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:12.625412  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:12.628837  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:12.908197  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:13.039091  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:13.148688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:13.150626  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:13.407231  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:13.538129  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:13.633910  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:13.634284  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:13.907146  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:14.031963  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:14.138068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:14.139837  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:14.406746  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:14.532196  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:14.621320  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:14.623936  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:14.625755  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:14.915462  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:15.039044  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:15.151579  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:15.154892  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:15.407061  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:15.532461  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:15.623936  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:15.631335  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:15.907583  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:16.031676  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:16.132063  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:16.132159  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:16.407214  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:16.531877  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:16.622218  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:16.624731  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:16.627246  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:16.907112  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:17.031940  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:17.123879  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:17.125637  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:17.407684  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:17.531652  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:17.623237  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:17.624952  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:17.907148  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:18.032472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:18.124997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:18.128424  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:18.408608  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:18.533769  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:18.622613  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:18.625361  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:18.626956  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:18.907688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:19.032365  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:19.126642  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:19.128047  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:19.407217  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:19.532128  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:19.625119  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:19.637814  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:19.908672  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:20.032425  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:20.134537  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:20.138948  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:20.407812  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:20.531792  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:20.623610  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:20.625524  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:20.907557  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:21.032364  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:21.122717  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:21.125484  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:21.128074  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:21.408400  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:21.532268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:21.626506  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:21.627950  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:21.907994  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:22.032143  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:22.125563  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:22.128530  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:22.407181  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:22.531457  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:22.627992  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:22.630782  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:22.908478  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:23.033385  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:23.150200  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:23.158628  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:23.175368  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:23.407473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:23.562996  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:23.633408  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:23.649343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:23.908349  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:24.051429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:24.130915  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:24.133182  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:24.407528  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:24.534113  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:24.624993  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:24.625264  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:24.906353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:25.031654  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:25.125434  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:25.125905  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:25.407969  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:25.532605  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:25.630968  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:25.632259  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:25.636454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:25.907689  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:26.036437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:26.130115  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:26.132311  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:26.407831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:26.532065  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:26.632353  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:26.635310  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:26.907144  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:27.031562  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:27.126473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:27.129258  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:27.407457  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:27.534341  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:27.628767  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:27.630758  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:27.906634  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:28.032860  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:28.133735  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:28.135066  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:28.139879  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:28.407295  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:28.530907  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:28.623822  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:28.625241  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:28.908410  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:29.032110  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:29.123334  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:29.125825  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:29.408334  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:29.531931  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:29.635718  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:29.637010  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:29.907747  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:30.032207  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:30.125423  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:30.129378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:30.429404  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:30.531075  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:30.623099  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:30.623506  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:30.626472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:30.907454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:31.031130  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:31.122882  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:31.125933  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:31.409131  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:31.536612  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:31.637346  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:31.637939  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:31.909737  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:32.033045  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:32.124243  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:32.125947  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:32.415304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:32.531436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:32.623785  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:32.628047  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:32.906594  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:33.032489  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:33.121437  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:33.123577  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:33.126223  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:33.407727  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:33.544797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:33.639503  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:33.639917  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:33.908501  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:34.042595  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:34.129639  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:34.143896  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:34.408132  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:34.532017  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:34.625507  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:34.625737  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:34.908204  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:35.031539  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:35.122869  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:35.125949  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:35.407190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:35.531204  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:35.621395  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:35.622954  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:35.627663  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:35.907161  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:36.031668  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:36.123711  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:36.125683  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:36.406719  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:36.531817  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:36.624197  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:36.625728  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:36.906856  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:37.039202  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:37.132125  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:37.139533  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:37.407783  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:37.532343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:37.627594  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:37.629986  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:37.635302  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:37.907649  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:38.036353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:38.129070  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:38.141087  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:38.406532  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:38.532632  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:38.629637  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:38.631244  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:38.907315  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:39.032138  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:39.144645  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:39.146049  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:39.410944  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:39.532413  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:39.626005  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:39.634918  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:39.907693  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:40.048331  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:40.135999  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:40.145681  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:40.147455  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:40.420292  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:40.532803  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:40.632950  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:40.633952  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:40.907773  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:41.031222  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:41.124458  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:41.126401  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:41.407738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:41.541065  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:41.627392  593872 kapi.go:107] duration metric: took 1m29.005936692s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:54:41.627849  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:41.907865  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:42.031882  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:42.128072  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:42.136230  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:42.408023  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:42.535450  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:42.628719  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:42.907633  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:43.037631  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:43.126577  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:43.408257  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:43.532830  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:43.622674  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:43.906383  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:44.032885  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:44.130566  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:44.412103  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:44.531674  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:44.624493  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:44.625354  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:44.907163  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:45.041059  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:45.146171  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:45.409090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:45.538109  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:45.625064  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:45.906825  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:46.032091  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:46.126749  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:46.408000  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:46.532548  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:46.625356  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:46.906911  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:47.032115  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:47.126363  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:47.126613  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:47.408116  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:47.537259  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:47.631788  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:47.906519  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:48.032902  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:48.124487  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:48.407049  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:48.531900  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:48.623826  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:48.907268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:49.032168  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:49.124770  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:49.407794  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:49.532588  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:49.621573  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:49.624122  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:49.907138  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:50.031140  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:50.125351  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:50.407077  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:50.531766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:50.623649  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:50.906983  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:51.031224  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:51.133790  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:51.407000  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:51.532440  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:51.621696  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:51.629042  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:51.910023  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:52.034638  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:52.131483  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:52.407512  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:52.531145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:52.624175  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:52.906583  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:53.031793  593872 kapi.go:107] duration metric: took 1m40.005442028s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:54:53.123528  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:53.407310  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:53.621853  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:53.624084  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:53.907521  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:54.125743  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:54.406985  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:54.624565  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:54.907009  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:55.123603  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:55.414564  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:55.633000  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:55.634997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:55.907186  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:56.129208  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:56.409356  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:56.626668  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:56.907820  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:57.127443  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:57.407400  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:57.633052  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:57.636316  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:57.907158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:58.124639  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:58.408153  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:58.628065  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:58.906454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:59.137305  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:59.409662  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:59.625505  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:59.908145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:00.234020  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:00.240334  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:00.412169  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:00.638990  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:00.907738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:01.137609  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:01.408120  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:01.625029  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:01.908497  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:02.123863  593872 kapi.go:107] duration metric: took 1m49.505042131s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:55:02.407797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:02.620522  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:02.908784  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:03.409791  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:03.906980  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:04.408196  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:04.628352  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:04.908939  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:05.407593  593872 kapi.go:107] duration metric: took 1m48.004337441s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:55:05.410378  593872 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-060912 cluster.
	I0920 18:55:05.412210  593872 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:55:05.414346  593872 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:55:05.416700  593872 out.go:177] * Enabled addons: inspektor-gadget, cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 18:55:05.419147  593872 addons.go:510] duration metric: took 1m58.760444537s for enable addons: enabled=[inspektor-gadget cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 18:55:07.120857  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:09.122235  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:11.123435  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:13.620855  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:15.621324  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:18.121934  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:20.620894  593872 pod_ready.go:93] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"True"
	I0920 18:55:20.620923  593872 pod_ready.go:82] duration metric: took 1m28.006539781s for pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.620936  593872 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.626791  593872 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace has status "Ready":"True"
	I0920 18:55:20.626827  593872 pod_ready.go:82] duration metric: took 5.883525ms for pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.626855  593872 pod_ready.go:39] duration metric: took 1m30.447894207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:55:20.626873  593872 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:55:20.626917  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:20.627002  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:20.683602  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:20.683673  593872 cri.go:89] found id: ""
	I0920 18:55:20.683688  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:20.683760  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.687980  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:20.688058  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:20.725151  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:20.725197  593872 cri.go:89] found id: ""
	I0920 18:55:20.725206  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:20.725263  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.728863  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:20.728936  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:20.768741  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:20.768764  593872 cri.go:89] found id: ""
	I0920 18:55:20.768772  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:20.768830  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.773058  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:20.773130  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:20.811084  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:20.811108  593872 cri.go:89] found id: ""
	I0920 18:55:20.811117  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:20.811173  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.814706  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:20.814779  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:20.856300  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:20.856326  593872 cri.go:89] found id: ""
	I0920 18:55:20.856334  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:20.856389  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.860484  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:20.860560  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:20.902306  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:20.902329  593872 cri.go:89] found id: ""
	I0920 18:55:20.902347  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:20.902405  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.905966  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:20.906048  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:20.949793  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:20.949815  593872 cri.go:89] found id: ""
	I0920 18:55:20.949823  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:20.949881  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.953468  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:20.953498  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:20.971004  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:20.971114  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:21.056388  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:21.056425  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:21.104981  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:21.105015  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:21.151277  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:21.151308  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:21.229700  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:21.229738  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:21.276985  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:21.277013  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:21.366118  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:21.366161  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:21.585779  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:21.585813  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:21.630226  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:21.630253  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:21.675630  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:21.675658  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:21.774311  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:21.774353  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:24.342050  593872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:55:24.356377  593872 api_server.go:72] duration metric: took 2m17.697948817s to wait for apiserver process to appear ...
	I0920 18:55:24.356407  593872 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:55:24.356442  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:24.356512  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:24.396349  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:24.396374  593872 cri.go:89] found id: ""
	I0920 18:55:24.396383  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:24.396440  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.400025  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:24.400103  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:24.437632  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:24.437656  593872 cri.go:89] found id: ""
	I0920 18:55:24.437665  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:24.437765  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.441226  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:24.441310  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:24.480492  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:24.480515  593872 cri.go:89] found id: ""
	I0920 18:55:24.480523  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:24.480588  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.484432  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:24.484514  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:24.534785  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:24.534810  593872 cri.go:89] found id: ""
	I0920 18:55:24.534819  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:24.534880  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.538697  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:24.538963  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:24.588756  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:24.588780  593872 cri.go:89] found id: ""
	I0920 18:55:24.588789  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:24.588877  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.592738  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:24.592830  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:24.634956  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:24.634979  593872 cri.go:89] found id: ""
	I0920 18:55:24.634987  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:24.635066  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.638509  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:24.638580  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:24.682689  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:24.682712  593872 cri.go:89] found id: ""
	I0920 18:55:24.682720  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:24.682778  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.686419  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:24.686490  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:24.769481  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:24.769516  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:24.824413  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:24.824464  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:24.873507  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:24.873540  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:24.928565  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:24.928603  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:24.972207  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:24.972240  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:25.034067  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:25.034101  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:25.088479  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:25.088515  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:25.180642  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:25.180679  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:25.197983  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:25.198018  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:25.348415  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:25.348488  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:25.396676  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:25.396702  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:27.999369  593872 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 18:55:28.011064  593872 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 18:55:28.012493  593872 api_server.go:141] control plane version: v1.31.1
	I0920 18:55:28.012529  593872 api_server.go:131] duration metric: took 3.656113679s to wait for apiserver health ...
	I0920 18:55:28.012540  593872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:55:28.012573  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:28.012671  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:28.054623  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:28.054647  593872 cri.go:89] found id: ""
	I0920 18:55:28.054656  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:28.054716  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.058765  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:28.058859  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:28.103813  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:28.103835  593872 cri.go:89] found id: ""
	I0920 18:55:28.103843  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:28.103902  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.107830  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:28.107903  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:28.156157  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:28.156183  593872 cri.go:89] found id: ""
	I0920 18:55:28.156191  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:28.156248  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.160447  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:28.160566  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:28.201058  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:28.201081  593872 cri.go:89] found id: ""
	I0920 18:55:28.201089  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:28.201166  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.204832  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:28.204932  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:28.243472  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:28.243506  593872 cri.go:89] found id: ""
	I0920 18:55:28.243516  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:28.243582  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.247662  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:28.247823  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:28.294254  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:28.294288  593872 cri.go:89] found id: ""
	I0920 18:55:28.294297  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:28.294369  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.297872  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:28.297956  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:28.336421  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:28.336456  593872 cri.go:89] found id: ""
	I0920 18:55:28.336465  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:28.336532  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.340282  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:28.340356  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:28.412211  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:28.412251  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:28.460209  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:28.460238  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:28.511508  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:28.511544  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:28.604612  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:28.604650  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:28.654841  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:28.654872  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:28.695824  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:28.695854  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:28.738546  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:28.738579  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:28.778897  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:28.778928  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:28.872309  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:28.872347  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:28.889387  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:28.889419  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:29.037307  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:29.037336  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:31.614602  593872 system_pods.go:59] 18 kube-system pods found
	I0920 18:55:31.614646  593872 system_pods.go:61] "coredns-7c65d6cfc9-cl27s" [04689caf-fd31-41a8-b632-da305d969b77] Running
	I0920 18:55:31.614653  593872 system_pods.go:61] "csi-hostpath-attacher-0" [688a011d-4561-4c00-844b-6aa7f297a0aa] Running
	I0920 18:55:31.614658  593872 system_pods.go:61] "csi-hostpath-resizer-0" [106e8af5-f95f-436e-9fab-304f7ea18617] Running
	I0920 18:55:31.614663  593872 system_pods.go:61] "csi-hostpathplugin-7jhqn" [6803e01f-d3a5-4fe1-b76c-a936b8eb8a69] Running
	I0920 18:55:31.614667  593872 system_pods.go:61] "etcd-addons-060912" [f2728dff-aab5-4b32-bf02-93f8d2b5a6c1] Running
	I0920 18:55:31.614671  593872 system_pods.go:61] "kindnet-tl865" [9c700cfd-066f-47c6-aade-257d64dd87fd] Running
	I0920 18:55:31.614675  593872 system_pods.go:61] "kube-apiserver-addons-060912" [af9cd9b5-fbf4-4bb2-b6b8-58e119cc2e54] Running
	I0920 18:55:31.614679  593872 system_pods.go:61] "kube-controller-manager-addons-060912" [e2b17a09-a56a-42f3-885f-853c02ecc200] Running
	I0920 18:55:31.614683  593872 system_pods.go:61] "kube-ingress-dns-minikube" [1b76bbee-eac5-4d2e-b598-514d3650c987] Running
	I0920 18:55:31.614687  593872 system_pods.go:61] "kube-proxy-c522g" [3a56e42d-23c2-4774-b82c-3c6b2daa3a1f] Running
	I0920 18:55:31.614691  593872 system_pods.go:61] "kube-scheduler-addons-060912" [a6533c75-ea94-4da5-bb5e-7a23d9d92d69] Running
	I0920 18:55:31.614697  593872 system_pods.go:61] "metrics-server-84c5f94fbc-6n52n" [707188cc-7e99-491b-b510-82f0f9320fee] Running
	I0920 18:55:31.614703  593872 system_pods.go:61] "nvidia-device-plugin-daemonset-6c4pc" [70208489-2144-41c7-b72c-895d0344ccd9] Running
	I0920 18:55:31.614706  593872 system_pods.go:61] "registry-66c9cd494c-w8gt6" [ded46fe6-d8da-4546-81fd-d1f1949dcadb] Running
	I0920 18:55:31.614710  593872 system_pods.go:61] "registry-proxy-8ghgp" [5a98470b-31f7-4f1c-9586-f681f375453b] Running
	I0920 18:55:31.614714  593872 system_pods.go:61] "snapshot-controller-56fcc65765-r8g9v" [b22e42d4-0119-4486-b078-a8a3532a14c2] Running
	I0920 18:55:31.614717  593872 system_pods.go:61] "snapshot-controller-56fcc65765-wp8r8" [0aa17fbb-ebc2-41dc-8a5a-de69a6f62b73] Running
	I0920 18:55:31.614725  593872 system_pods.go:61] "storage-provisioner" [76adfe52-d569-4e95-82f8-414bc1dcbc24] Running
	I0920 18:55:31.614731  593872 system_pods.go:74] duration metric: took 3.602185872s to wait for pod list to return data ...
	I0920 18:55:31.614744  593872 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:55:31.617429  593872 default_sa.go:45] found service account: "default"
	I0920 18:55:31.617456  593872 default_sa.go:55] duration metric: took 2.706624ms for default service account to be created ...
	I0920 18:55:31.617465  593872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:55:31.627751  593872 system_pods.go:86] 18 kube-system pods found
	I0920 18:55:31.627789  593872 system_pods.go:89] "coredns-7c65d6cfc9-cl27s" [04689caf-fd31-41a8-b632-da305d969b77] Running
	I0920 18:55:31.627797  593872 system_pods.go:89] "csi-hostpath-attacher-0" [688a011d-4561-4c00-844b-6aa7f297a0aa] Running
	I0920 18:55:31.627803  593872 system_pods.go:89] "csi-hostpath-resizer-0" [106e8af5-f95f-436e-9fab-304f7ea18617] Running
	I0920 18:55:31.627808  593872 system_pods.go:89] "csi-hostpathplugin-7jhqn" [6803e01f-d3a5-4fe1-b76c-a936b8eb8a69] Running
	I0920 18:55:31.627813  593872 system_pods.go:89] "etcd-addons-060912" [f2728dff-aab5-4b32-bf02-93f8d2b5a6c1] Running
	I0920 18:55:31.627817  593872 system_pods.go:89] "kindnet-tl865" [9c700cfd-066f-47c6-aade-257d64dd87fd] Running
	I0920 18:55:31.627821  593872 system_pods.go:89] "kube-apiserver-addons-060912" [af9cd9b5-fbf4-4bb2-b6b8-58e119cc2e54] Running
	I0920 18:55:31.627826  593872 system_pods.go:89] "kube-controller-manager-addons-060912" [e2b17a09-a56a-42f3-885f-853c02ecc200] Running
	I0920 18:55:31.627831  593872 system_pods.go:89] "kube-ingress-dns-minikube" [1b76bbee-eac5-4d2e-b598-514d3650c987] Running
	I0920 18:55:31.627836  593872 system_pods.go:89] "kube-proxy-c522g" [3a56e42d-23c2-4774-b82c-3c6b2daa3a1f] Running
	I0920 18:55:31.627840  593872 system_pods.go:89] "kube-scheduler-addons-060912" [a6533c75-ea94-4da5-bb5e-7a23d9d92d69] Running
	I0920 18:55:31.627844  593872 system_pods.go:89] "metrics-server-84c5f94fbc-6n52n" [707188cc-7e99-491b-b510-82f0f9320fee] Running
	I0920 18:55:31.627863  593872 system_pods.go:89] "nvidia-device-plugin-daemonset-6c4pc" [70208489-2144-41c7-b72c-895d0344ccd9] Running
	I0920 18:55:31.627867  593872 system_pods.go:89] "registry-66c9cd494c-w8gt6" [ded46fe6-d8da-4546-81fd-d1f1949dcadb] Running
	I0920 18:55:31.627873  593872 system_pods.go:89] "registry-proxy-8ghgp" [5a98470b-31f7-4f1c-9586-f681f375453b] Running
	I0920 18:55:31.627879  593872 system_pods.go:89] "snapshot-controller-56fcc65765-r8g9v" [b22e42d4-0119-4486-b078-a8a3532a14c2] Running
	I0920 18:55:31.627884  593872 system_pods.go:89] "snapshot-controller-56fcc65765-wp8r8" [0aa17fbb-ebc2-41dc-8a5a-de69a6f62b73] Running
	I0920 18:55:31.627888  593872 system_pods.go:89] "storage-provisioner" [76adfe52-d569-4e95-82f8-414bc1dcbc24] Running
	I0920 18:55:31.627898  593872 system_pods.go:126] duration metric: took 10.426903ms to wait for k8s-apps to be running ...
	I0920 18:55:31.627918  593872 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:55:31.627995  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:55:31.639886  593872 system_svc.go:56] duration metric: took 11.957384ms WaitForService to wait for kubelet
	I0920 18:55:31.639916  593872 kubeadm.go:582] duration metric: took 2m24.981492962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:31.639936  593872 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:55:31.643318  593872 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 18:55:31.643354  593872 node_conditions.go:123] node cpu capacity is 2
	I0920 18:55:31.643367  593872 node_conditions.go:105] duration metric: took 3.425286ms to run NodePressure ...
	I0920 18:55:31.643399  593872 start.go:241] waiting for startup goroutines ...
	I0920 18:55:31.643414  593872 start.go:246] waiting for cluster config update ...
	I0920 18:55:31.643431  593872 start.go:255] writing updated cluster config ...
	I0920 18:55:31.643750  593872 ssh_runner.go:195] Run: rm -f paused
	I0920 18:55:31.999069  593872 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:55:32.001537  593872 out.go:177] * Done! kubectl is now configured to use "addons-060912" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.585676985Z" level=info msg="Stopped container e086123aa47ae4a67834831eec7cbd41738ff12e1500df52a1869076d26e678c: kube-system/csi-hostpathplugin-7jhqn/node-driver-registrar" id=8ecc14f4-9fdb-4ac0-a32e-1cffbd577291 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.634302039Z" level=info msg="Stopped container c3fbc41f3df281b9bf2d007a036f7bf375b5434504c31f1a7dcad56a7e7222f2: kube-system/csi-hostpathplugin-7jhqn/csi-snapshotter" id=94dd0d86-23d6-411a-bccc-4515db79a947 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.649208812Z" level=info msg="Stopped container f24656c58f5dd63d548a7a4f9de63df1b25f41e896abc98c4c89a01dfad025d8: kube-system/csi-hostpathplugin-7jhqn/csi-external-health-monitor-controller" id=c8117b85-0cfb-41b0-b8ec-82e0c7dbbb72 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.670498500Z" level=info msg="Stopped container 7513b7e880a0a0c4cf956fc274492e362897ec94c2492000a8116fca2eaeb1de: kube-system/registry-proxy-8ghgp/registry-proxy" id=55924799-4468-4db9-ad80-781154e51d38 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.671676589Z" level=info msg="Stopping pod sandbox: b9bb38ddbd7cf4a308f73d6d51399c218085db0a09df722237bcf13e618a9411" id=80b8db59-d736-4f6a-893c-e90c36995061 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.672573423Z" level=info msg="Stopped pod sandbox: 606954facdfaa241c3635d5b4d47b5d74a00fcc265a094ba1e4ef7b08542319e" id=72b03c06-95f1-48af-ad6a-38eb9150e01d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.672759975Z" level=info msg="Stopped container 411e55758fb12ae2e5095946b1bbdceb2bc5d8f0e83fc57a604278d6138bee30: kube-system/csi-hostpathplugin-7jhqn/csi-provisioner" id=19c44599-d928-4444-b554-537f62f18256 name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.677395022Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-K453HVBH6RYHPZP7 - [0:0]\n:KUBE-HP-3H6EY67RHFIH2R7J - [0:0]\n:KUBE-HP-NG6UXPTOK7PXYA2S - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xg7x4_ingress-nginx_5991ce09-b48a-4443-b4d7-483c6ff98c74_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-K453HVBH6RYHPZP7\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xg7x4_ingress-nginx_5991ce09-b48a-4443-b4d7-483c6ff98c74_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-3H6EY67RHFIH2R7J\n-A KUBE-HP-3H6EY67RHFIH2R7J -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xg7x4_ingress-nginx_5991ce09-b48a-4443-b4d7-483c6ff98c74_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-3H6EY67RHFIH2R7J -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xg7x4_ingress-nginx_5991ce09-b48a-4443-b4d7-483c6ff98c74_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.19:80\n-A KUBE-HP-K453HVBH6RYHPZP7 -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xg7x4_ingress-nginx_5991ce09-b48a-4443-b4d7-483c6ff98c74_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-K453HVBH6RYHPZP7 -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xg7x4_ingress-nginx_5991ce09-b48a-4443-b4d7-483c6ff98c74_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.19:443\n-X KUBE-HP-NG6UXPTOK7PXYA2S\nCOMMIT\n"
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.686939069Z" level=info msg="Stopped pod sandbox: 37954897bbf405925f49e55afa60783dd3eeffc3af1137694956881e8b129ad2" id=794211e0-0a5d-4634-8edf-55f4ea6f4d16 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.687800852Z" level=info msg="Stopped container 1ab570de5f651323bfa4b657325189ca49de321b58cebafe08513e6f73031025: kube-system/csi-hostpathplugin-7jhqn/hostpath" id=2edc2c6c-a533-4953-a295-7a1df6fb82bb name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.689388211Z" level=info msg="Stopping pod sandbox: 8819a3a96e31a766c50edbc83cbea1b66630146b3100b1895df72d87a6d543cf" id=95affdbb-16a3-4073-a392-f50e134d791c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.689623353Z" level=info msg="Got pod network &{Name:csi-hostpathplugin-7jhqn Namespace:kube-system ID:8819a3a96e31a766c50edbc83cbea1b66630146b3100b1895df72d87a6d543cf UID:6803e01f-d3a5-4fe1-b76c-a936b8eb8a69 NetNS:/var/run/netns/0d7f28fd-45cd-47e7-8c10-3981ffccc74b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.689753634Z" level=info msg="Deleting pod kube-system_csi-hostpathplugin-7jhqn from CNI network \"kindnet\" (type=ptp)"
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.698093229Z" level=info msg="Removing container: 86a9f1538bea7a89f8a464406bebf67ff04c5f00e82c6c59014d32ccc899f731" id=2c021e0a-1d03-48b6-9017-94fe3825426c name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.704440936Z" level=info msg="Closing host port tcp:5000"
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.721824494Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.722040329Z" level=info msg="Got pod network &{Name:registry-proxy-8ghgp Namespace:kube-system ID:b9bb38ddbd7cf4a308f73d6d51399c218085db0a09df722237bcf13e618a9411 UID:5a98470b-31f7-4f1c-9586-f681f375453b NetNS:/var/run/netns/2c99ea97-a2b2-4c58-8bad-6e27ec646477 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.722176509Z" level=info msg="Deleting pod kube-system_registry-proxy-8ghgp from CNI network \"kindnet\" (type=ptp)"
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.737284480Z" level=info msg="Removed container 86a9f1538bea7a89f8a464406bebf67ff04c5f00e82c6c59014d32ccc899f731: kube-system/csi-hostpath-attacher-0/csi-attacher" id=2c021e0a-1d03-48b6-9017-94fe3825426c name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.738681850Z" level=info msg="Removing container: 5d98a7cd2c68061c87a81d996b4f168fe4798c1b5cca4f9d2aa0bf73f1880957" id=16d81798-c935-40e5-aee9-42e9b6e2d82d name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.752075766Z" level=info msg="Stopped pod sandbox: 8819a3a96e31a766c50edbc83cbea1b66630146b3100b1895df72d87a6d543cf" id=95affdbb-16a3-4073-a392-f50e134d791c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.776004183Z" level=info msg="Removed container 5d98a7cd2c68061c87a81d996b4f168fe4798c1b5cca4f9d2aa0bf73f1880957: kube-system/csi-hostpath-resizer-0/csi-resizer" id=16d81798-c935-40e5-aee9-42e9b6e2d82d name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.780485163Z" level=info msg="Removing container: 963364557a9d0c76d0d6a3a7c19208cab73f895f9721e65c1a649336ce273e54" id=70f98c69-86d7-4492-8632-bd1d2335d3cf name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.819560164Z" level=info msg="Stopped pod sandbox: b9bb38ddbd7cf4a308f73d6d51399c218085db0a09df722237bcf13e618a9411" id=80b8db59-d736-4f6a-893c-e90c36995061 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:04:48 addons-060912 crio[963]: time="2024-09-20 19:04:48.824015372Z" level=info msg="Removed container 963364557a9d0c76d0d6a3a7c19208cab73f895f9721e65c1a649336ce273e54: kube-system/registry-66c9cd494c-w8gt6/registry" id=70f98c69-86d7-4492-8632-bd1d2335d3cf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	2288d43f2e3ec       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 minutes ago       Exited              gadget                                   6                   1ab807f1e0692       gadget-7nhmd
	4a43484742705       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                                 9 minutes ago       Running             gcp-auth                                 0                   41279eea3be85       gcp-auth-89d5ffd79-lnzdp
	85695c2824bbf       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3                             9 minutes ago       Running             controller                               0                   70f947431e6fa       ingress-nginx-controller-bc57996ff-xg7x4
	82b19cfe4aa53       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                                             9 minutes ago       Exited              patch                                    3                   23fd631d639d0       ingress-nginx-admission-patch-fdtb4
	411e55758fb12       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          10 minutes ago      Exited              csi-provisioner                          0                   8819a3a96e31a       csi-hostpathplugin-7jhqn
	e54d7f0760cf4       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            10 minutes ago      Exited              liveness-probe                           0                   8819a3a96e31a       csi-hostpathplugin-7jhqn
	1ab570de5f651       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           10 minutes ago      Exited              hostpath                                 0                   8819a3a96e31a       csi-hostpathplugin-7jhqn
	e086123aa47ae       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                10 minutes ago      Exited              node-driver-registrar                    0                   8819a3a96e31a       csi-hostpathplugin-7jhqn
	7513b7e880a0a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:1f7f8eef6b75f46cf7e603e969c6de93ddc78fa8fbac705441a4b98a85554cad                              10 minutes ago      Exited              registry-proxy                           0                   b9bb38ddbd7cf       registry-proxy-8ghgp
	61ffa12ea9f4d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3                   10 minutes ago      Exited              create                                   0                   5d21c325f34e3       ingress-nginx-admission-create-c2ktk
	0d50a7124ad10       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      10 minutes ago      Running             volume-snapshot-controller               0                   87b0bcb0445cc       snapshot-controller-56fcc65765-wp8r8
	ec4a2ebde1d92       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              10 minutes ago      Running             yakd                                     0                   307ea4d8792ab       yakd-dashboard-67d98fc6b-v89pf
	6c859b6d092c6       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             10 minutes ago      Running             local-path-provisioner                   0                   f2596fcb2f979       local-path-provisioner-86d989889c-4phmr
	36c7137a4230f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             10 minutes ago      Running             minikube-ingress-dns                     0                   d7da5cbaf553a       kube-ingress-dns-minikube
	26abc82a1efc9       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f                        10 minutes ago      Running             metrics-server                           0                   d47ebd6d1ffd2       metrics-server-84c5f94fbc-6n52n
	858722a918b70       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d                               10 minutes ago      Running             cloud-spanner-emulator                   0                   9b0c7737cb9ee       cloud-spanner-emulator-5b584cc74-77rvl
	f24656c58f5dd       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Exited              csi-external-health-monitor-controller   0                   8819a3a96e31a       csi-hostpathplugin-7jhqn
	b892d5aeaafb2       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                                     10 minutes ago      Running             nvidia-device-plugin-ctr                 0                   cf2072a44be11       nvidia-device-plugin-daemonset-6c4pc
	50e7e14b0a237       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      10 minutes ago      Running             volume-snapshot-controller               0                   da7dc5835a1fb       snapshot-controller-56fcc65765-r8g9v
	b6bb91f96aedc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             10 minutes ago      Running             storage-provisioner                      0                   8bd9bba6c8fc6       storage-provisioner
	1a880bc579bf0       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                                             10 minutes ago      Running             coredns                                  0                   5b3730f2d41b7       coredns-7c65d6cfc9-cl27s
	b8685b3b7a398       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                                             11 minutes ago      Running             kindnet-cni                              0                   a3e64840ab606       kindnet-tl865
	6b08aa03c509c       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                                             11 minutes ago      Running             kube-proxy                               0                   16ec6dded1779       kube-proxy-c522g
	4ecd6cb0f6955       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                                             11 minutes ago      Running             kube-controller-manager                  0                   cf3a116aeab5b       kube-controller-manager-addons-060912
	0f324b0fef4f9       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                                             11 minutes ago      Running             kube-scheduler                           0                   3d36f26aa452e       kube-scheduler-addons-060912
	8bee65ae4a888       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                                             11 minutes ago      Running             kube-apiserver                           0                   33b4572492492       kube-apiserver-addons-060912
	ea2efa9e4710b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                                             11 minutes ago      Running             etcd                                     0                   f7b5fa9394991       etcd-addons-060912
	
	
	==> coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] <==
	[INFO] 10.244.0.18:55683 - 36665 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078695s
	[INFO] 10.244.0.18:40033 - 64165 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002757275s
	[INFO] 10.244.0.18:40033 - 10146 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002422465s
	[INFO] 10.244.0.18:44258 - 36529 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00067368s
	[INFO] 10.244.0.18:44258 - 3251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000648646s
	[INFO] 10.244.0.18:48701 - 30933 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114313s
	[INFO] 10.244.0.18:48701 - 3798 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177706s
	[INFO] 10.244.0.18:43291 - 11795 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065912s
	[INFO] 10.244.0.18:43291 - 45806 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060553s
	[INFO] 10.244.0.18:54945 - 47277 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055015s
	[INFO] 10.244.0.18:54945 - 42927 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080122s
	[INFO] 10.244.0.18:54866 - 8361 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001695403s
	[INFO] 10.244.0.18:54866 - 41643 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001756359s
	[INFO] 10.244.0.18:33956 - 27160 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067043s
	[INFO] 10.244.0.18:33956 - 20762 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000052537s
	[INFO] 10.244.0.20:52499 - 34827 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000205341s
	[INFO] 10.244.0.20:36942 - 16052 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363782s
	[INFO] 10.244.0.20:52995 - 29444 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000351162s
	[INFO] 10.244.0.20:44078 - 60085 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000383892s
	[INFO] 10.244.0.20:54831 - 11107 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000215918s
	[INFO] 10.244.0.20:42723 - 50453 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000198564s
	[INFO] 10.244.0.20:33980 - 22876 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003708615s
	[INFO] 10.244.0.20:36030 - 39141 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003676779s
	[INFO] 10.244.0.20:46057 - 16877 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005906983s
	[INFO] 10.244.0.20:59156 - 51441 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.005414694s
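Editor's note on the query pattern above: the alternating NXDOMAIN/NOERROR lines are ordinary resolv.conf search-path expansion, not a registry failure by themselves. With the kubelet default `ndots:5`, a client resolving `registry.kube-system.svc.cluster.local` (four dots) tries each search-domain suffix first, producing one NXDOMAIN per suffix before the absolute name answers NOERROR. A minimal sketch of that candidate-name generation, assuming the search list visible in the log (`kube-system.svc.cluster.local`, `svc.cluster.local`, `cluster.local`, `us-east-2.compute.internal`):

```python
# Sketch of glibc/musl-style search-path expansion, as seen in the CoreDNS log.
# SEARCH mirrors the suffixes appearing in the NXDOMAIN lines above; ndots=5 is
# the kubelet default for pod resolv.conf. This is an illustration, not the
# resolver's actual implementation.
SEARCH = [
    "kube-system.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "us-east-2.compute.internal",
]

def candidates(name: str, ndots: int = 5) -> list[str]:
    """Return the query names a resolver would try, in order."""
    names = []
    if name.count(".") < ndots:  # fewer dots than ndots: walk the search list first
        names.extend(f"{name}.{suffix}" for suffix in SEARCH)
    names.append(name)  # the name as given is tried last
    return names

for qname in candidates("registry.kube-system.svc.cluster.local"):
    print(qname)
```

Each suffixed candidate here corresponds to one NXDOMAIN pair (A/AAAA) in the log, and the final bare name to the NOERROR answers — so the DNS side looks healthy, and the `wget` timeout in the failed test points at the service/endpoint rather than name resolution.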
	
	
	==> describe nodes <==
	Name:               addons-060912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-060912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=addons-060912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_53_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-060912
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:52:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-060912
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:04:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:04:35 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:04:35 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:04:35 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:04:35 +0000   Fri, 20 Sep 2024 18:53:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-060912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 10dc14ff36a34258b0be727d4ac3c9e0
	  System UUID:                f67c7638-9fc9-4a4c-946b-9e8a422e1126
	  Boot ID:                    b363b069-6c72-47b0-a80b-36cf6b75e261
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-5b584cc74-77rvl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-7nhmd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-lnzdp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xg7x4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-cl27s                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-addons-060912                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-tl865                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-060912                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-060912       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-c522g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-060912                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-6n52n             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-6c4pc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-r8g9v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-wp8r8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-4phmr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-v89pf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-060912 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-060912 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-060912 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node addons-060912 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node addons-060912 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node addons-060912 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-060912 event: Registered Node addons-060912 in Controller
	  Normal   NodeReady                11m                kubelet          Node addons-060912 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] <==
	{"level":"info","ts":"2024-09-20T18:52:55.959369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T18:52:55.959404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T18:52:55.963242Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-060912 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:52:55.963430Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:52:55.963737Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.967032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:52:55.967327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:52:55.967356Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:52:55.967795Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:55.967940Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:55.968794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:52:55.971183Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.975635Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.975703Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.979680Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T18:53:07.939493Z","caller":"traceutil/trace.go:171","msg":"trace[436272735] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"123.951195ms","start":"2024-09-20T18:53:07.815524Z","end":"2024-09-20T18:53:07.939475Z","steps":["trace[436272735] 'process raft request'  (duration: 87.482157ms)","trace[436272735] 'compare'  (duration: 36.052588ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:53:08.037242Z","caller":"traceutil/trace.go:171","msg":"trace[849823426] linearizableReadLoop","detail":"{readStateIndex:392; appliedIndex:391; }","duration":"221.629804ms","start":"2024-09-20T18:53:07.815591Z","end":"2024-09-20T18:53:08.037220Z","steps":["trace[849823426] 'read index received'  (duration: 447.974µs)","trace[849823426] 'applied index is now lower than readState.Index'  (duration: 221.179918ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:53:08.037369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.740244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-09-20T18:53:08.159742Z","caller":"traceutil/trace.go:171","msg":"trace[402661395] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:385; }","duration":"282.386041ms","start":"2024-09-20T18:53:07.815587Z","end":"2024-09-20T18:53:08.097973Z","steps":["trace[402661395] 'agreement among raft nodes before linearized reading'  (duration: 221.690792ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:53:08.159844Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:53:07.815565Z","time spent":"344.255539ms","remote":"127.0.0.1:37374","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":636,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-09-20T18:53:09.679754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.759103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-20T18:53:09.680079Z","caller":"traceutil/trace.go:171","msg":"trace[154019258] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:389; }","duration":"118.292131ms","start":"2024-09-20T18:53:09.561774Z","end":"2024-09-20T18:53:09.680066Z","steps":["trace[154019258] 'range keys from in-memory index tree'  (duration: 117.688178ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:02:56.467684Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1571}
	{"level":"info","ts":"2024-09-20T19:02:56.500782Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1571,"took":"32.660846ms","hash":1649838481,"current-db-size-bytes":6402048,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3543040,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-20T19:02:56.500833Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1649838481,"revision":1571,"compact-revision":-1}
	
	
	==> gcp-auth [4a43484742705aed20cd218f80a63f0e4090a96ee4ee0cef03af1f076f0bfd2b] <==
	2024/09/20 18:55:04 GCP Auth Webhook started!
	2024/09/20 18:55:32 Ready to marshal response ...
	2024/09/20 18:55:32 Ready to write response ...
	2024/09/20 18:55:32 Ready to marshal response ...
	2024/09/20 18:55:32 Ready to write response ...
	2024/09/20 18:55:32 Ready to marshal response ...
	2024/09/20 18:55:32 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:47 Ready to marshal response ...
	2024/09/20 19:03:47 Ready to write response ...
	2024/09/20 19:04:03 Ready to marshal response ...
	2024/09/20 19:04:03 Ready to write response ...
	2024/09/20 19:04:37 Ready to marshal response ...
	2024/09/20 19:04:37 Ready to write response ...
	
	
	==> kernel <==
	 19:04:50 up  2:47,  0 users,  load average: 1.53, 0.77, 1.57
	Linux addons-060912 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] <==
	I0920 19:02:49.475145       1 main.go:299] handling current node
	I0920 19:02:59.470239       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:02:59.470274       1 main.go:299] handling current node
	I0920 19:03:09.470021       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:03:09.470057       1 main.go:299] handling current node
	I0920 19:03:19.469867       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:03:19.469992       1 main.go:299] handling current node
	I0920 19:03:29.470029       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:03:29.470066       1 main.go:299] handling current node
	I0920 19:03:39.471133       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:03:39.471165       1 main.go:299] handling current node
	I0920 19:03:49.469972       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:03:49.470129       1 main.go:299] handling current node
	I0920 19:03:59.469906       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:03:59.469943       1 main.go:299] handling current node
	I0920 19:04:09.470024       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:04:09.470059       1 main.go:299] handling current node
	I0920 19:04:19.470715       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:04:19.471848       1 main.go:299] handling current node
	I0920 19:04:29.470359       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:04:29.470513       1 main.go:299] handling current node
	I0920 19:04:39.469610       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:04:39.469647       1 main.go:299] handling current node
	I0920 19:04:49.470398       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:04:49.470433       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] <==
	E0920 18:53:49.741864       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.251.166:443: connect: connection refused" logger="UnhandledError"
	W0920 18:53:49.741936       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.251.166:443: connect: connection refused
	E0920 18:53:49.741950       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.251.166:443: connect: connection refused" logger="UnhandledError"
	W0920 18:53:49.839219       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.251.166:443: connect: connection refused
	E0920 18:53:49.839338       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.251.166:443: connect: connection refused" logger="UnhandledError"
	W0920 18:54:13.103136       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:54:13.103188       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 18:54:13.103224       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:54:13.103281       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 18:54:13.104357       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 18:54:13.104389       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 18:55:20.382573       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 18:55:20.382656       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 18:55:20.383384       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	E0920 18:55:20.385944       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	E0920 18:55:20.391750       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	I0920 18:55:20.475621       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 19:03:36.739548       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.251.149"}
	I0920 19:04:15.141850       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] <==
	E0920 18:55:05.717810       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 18:55:06.174323       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 18:55:13.996402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="16.763242ms"
	I0920 18:55:13.996668       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="139.7µs"
	I0920 18:55:20.377898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="11.836818ms"
	I0920 18:55:20.378196       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="66.437µs"
	I0920 18:55:34.769619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-060912"
	I0920 19:00:40.185259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-060912"
	I0920 19:03:36.796988       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="26.854127ms"
	E0920 19:03:36.797121       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-7b5c95b59d\" failed with pods \"headlamp-7b5c95b59d-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	I0920 19:03:36.840946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="42.001653ms"
	I0920 19:03:36.861079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="20.022811ms"
	I0920 19:03:36.861244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="53.424µs"
	I0920 19:03:36.878287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="67.553µs"
	I0920 19:03:41.454453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="109.907µs"
	I0920 19:03:41.483390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="10.514432ms"
	I0920 19:03:41.484412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="88.615µs"
	I0920 19:03:48.375572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="19.799µs"
	I0920 19:03:58.513868       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0920 19:04:04.776198       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-060912"
	I0920 19:04:35.089829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-060912"
	I0920 19:04:48.014968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.967µs"
	I0920 19:04:48.080052       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0920 19:04:48.244739       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0920 19:04:48.957264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-060912"
	
	
	==> kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] <==
	I0920 18:53:11.974563       1 server_linux.go:66] "Using iptables proxy"
	I0920 18:53:12.292405       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 18:53:12.292610       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:53:12.395134       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 18:53:12.395264       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:53:12.411910       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:53:12.466911       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:53:12.467076       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:12.468804       1 config.go:199] "Starting service config controller"
	I0920 18:53:12.468933       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:53:12.469006       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:53:12.469013       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:53:12.469600       1 config.go:328] "Starting node config controller"
	I0920 18:53:12.469649       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:53:12.569297       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:53:12.579772       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:53:12.619741       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] <==
	W0920 18:52:59.368286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.368811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.368955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:52:59.369008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.369112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:52:59.369165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.369263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0920 18:52:59.368369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.369732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.369322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.370408       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:52:59.371930       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:52:59.373052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 18:52:59.374756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.374808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0920 18:52:59.373291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0920 18:52:59.373528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0920 18:52:59.373573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0920 18:52:59.374022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.376432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.377169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:53:00.561937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.871224    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-mountpoint-dir" (OuterVolumeSpecName: "mountpoint-dir") pod "6803e01f-d3a5-4fe1-b76c-a936b8eb8a69" (UID: "6803e01f-d3a5-4fe1-b76c-a936b8eb8a69"). InnerVolumeSpecName "mountpoint-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.871251    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-socket-dir" (OuterVolumeSpecName: "socket-dir") pod "6803e01f-d3a5-4fe1-b76c-a936b8eb8a69" (UID: "6803e01f-d3a5-4fe1-b76c-a936b8eb8a69"). InnerVolumeSpecName "socket-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.881860    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-kube-api-access-sgvjz" (OuterVolumeSpecName: "kube-api-access-sgvjz") pod "6803e01f-d3a5-4fe1-b76c-a936b8eb8a69" (UID: "6803e01f-d3a5-4fe1-b76c-a936b8eb8a69"). InnerVolumeSpecName "kube-api-access-sgvjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.931242    1488 csi_plugin.go:191] kubernetes.io/csi: registrationHandler.DeRegisterPlugin request for plugin hostpath.csi.k8s.io
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.970116    1488 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmxsz\" (UniqueName: \"kubernetes.io/projected/5a98470b-31f7-4f1c-9586-f681f375453b-kube-api-access-zmxsz\") pod \"5a98470b-31f7-4f1c-9586-f681f375453b\" (UID: \"5a98470b-31f7-4f1c-9586-f681f375453b\") "
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.970222    1488 reconciler_common.go:288] "Volume detached for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-csi-data-dir\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.970237    1488 reconciler_common.go:288] "Volume detached for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-registration-dir\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.970249    1488 reconciler_common.go:288] "Volume detached for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-mountpoint-dir\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.970259    1488 reconciler_common.go:288] "Volume detached for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-socket-dir\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.970269    1488 reconciler_common.go:288] "Volume detached for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-dev-dir\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.970278    1488 reconciler_common.go:288] "Volume detached for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-plugins-dir\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.970287    1488 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sgvjz\" (UniqueName: \"kubernetes.io/projected/6803e01f-d3a5-4fe1-b76c-a936b8eb8a69-kube-api-access-sgvjz\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:04:48 addons-060912 kubelet[1488]: I0920 19:04:48.972220    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a98470b-31f7-4f1c-9586-f681f375453b-kube-api-access-zmxsz" (OuterVolumeSpecName: "kube-api-access-zmxsz") pod "5a98470b-31f7-4f1c-9586-f681f375453b" (UID: "5a98470b-31f7-4f1c-9586-f681f375453b"). InnerVolumeSpecName "kube-api-access-zmxsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.071532    1488 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zmxsz\" (UniqueName: \"kubernetes.io/projected/5a98470b-31f7-4f1c-9586-f681f375453b-kube-api-access-zmxsz\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.401765    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106e8af5-f95f-436e-9fab-304f7ea18617" path="/var/lib/kubelet/pods/106e8af5-f95f-436e-9fab-304f7ea18617/volumes"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.402148    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688a011d-4561-4c00-844b-6aa7f297a0aa" path="/var/lib/kubelet/pods/688a011d-4561-4c00-844b-6aa7f297a0aa/volumes"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.402504    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928823d3-f613-4b8d-ba49-85f22d55c526" path="/var/lib/kubelet/pods/928823d3-f613-4b8d-ba49-85f22d55c526/volumes"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.402720    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded46fe6-d8da-4546-81fd-d1f1949dcadb" path="/var/lib/kubelet/pods/ded46fe6-d8da-4546-81fd-d1f1949dcadb/volumes"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.709795    1488 scope.go:117] "RemoveContainer" containerID="c3fbc41f3df281b9bf2d007a036f7bf375b5434504c31f1a7dcad56a7e7222f2"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.763371    1488 scope.go:117] "RemoveContainer" containerID="411e55758fb12ae2e5095946b1bbdceb2bc5d8f0e83fc57a604278d6138bee30"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.788619    1488 scope.go:117] "RemoveContainer" containerID="e54d7f0760cf4c1026dd3fe79d3a2c900431ac3990a15393637422eef6ef8645"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.816216    1488 scope.go:117] "RemoveContainer" containerID="1ab570de5f651323bfa4b657325189ca49de321b58cebafe08513e6f73031025"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.845641    1488 scope.go:117] "RemoveContainer" containerID="e086123aa47ae4a67834831eec7cbd41738ff12e1500df52a1869076d26e678c"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.868524    1488 scope.go:117] "RemoveContainer" containerID="f24656c58f5dd63d548a7a4f9de63df1b25f41e896abc98c4c89a01dfad025d8"
	Sep 20 19:04:49 addons-060912 kubelet[1488]: I0920 19:04:49.904476    1488 scope.go:117] "RemoveContainer" containerID="7513b7e880a0a0c4cf956fc274492e362897ec94c2492000a8116fca2eaeb1de"
	
	
	==> storage-provisioner [b6bb91f96aedcf859be9e5aeb0d364423ca21915d0fb376bd36caefb6936c622] <==
	I0920 18:53:50.915207       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:53:50.945131       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:53:50.945260       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:53:50.953155       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:53:50.953416       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9!
	I0920 18:53:50.953624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e274d82f-245d-49e4-a33f-104ef4bee3c3", APIVersion:"v1", ResourceVersion:"947", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9 became leader
	I0920 18:53:51.053580       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-060912 -n addons-060912
helpers_test.go:261: (dbg) Run:  kubectl --context addons-060912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-c2ktk ingress-nginx-admission-patch-fdtb4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-060912 describe pod busybox ingress-nginx-admission-create-c2ktk ingress-nginx-admission-patch-fdtb4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-060912 describe pod busybox ingress-nginx-admission-create-c2ktk ingress-nginx-admission-patch-fdtb4: exit status 1 (102.621547ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-060912/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 18:55:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hgwr8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hgwr8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m19s                   default-scheduler  Successfully assigned default/busybox to addons-060912
	  Normal   Pulling    7m49s (x4 over 9m19s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m49s (x4 over 9m19s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m49s (x4 over 9m19s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m37s (x6 over 9m18s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m12s (x20 over 9m18s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-c2ktk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fdtb4" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-060912 describe pod busybox ingress-nginx-admission-create-c2ktk ingress-nginx-admission-patch-fdtb4: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.36s)

TestAddons/parallel/Ingress (152.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-060912 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-060912 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-060912 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4dc44cfd-f6c8-41a8-a794-93a56f249aef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4dc44cfd-f6c8-41a8-a794-93a56f249aef] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003365982s
I0920 19:05:16.119243  593105 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-060912 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.930954347s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-060912 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 addons disable ingress-dns --alsologtostderr -v=1: (1.393803431s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 addons disable ingress --alsologtostderr -v=1: (7.775573845s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-060912
helpers_test.go:235: (dbg) docker inspect addons-060912:

-- stdout --
	[
	    {
	        "Id": "f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5",
	        "Created": "2024-09-20T18:52:39.740365125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 594367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T18:52:39.865408091Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/hosts",
	        "LogPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5-json.log",
	        "Name": "/addons-060912",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-060912:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-060912",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99-init/diff:/var/lib/docker/overlay2/a92e9e9bba1980ffadfbad04ca227253691a545526e59e24c9fd42023a78d162/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-060912",
	                "Source": "/var/lib/docker/volumes/addons-060912/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-060912",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-060912",
	                "name.minikube.sigs.k8s.io": "addons-060912",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e9d76a1d4f78b17f57be343ce89cd0030fce0fd6b21bfc9013be4de1e162bf8",
	            "SandboxKey": "/var/run/docker/netns/5e9d76a1d4f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-060912": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "01fa9f6b959f74a22901f7d7f124f8f0aa8983b8fa8db0965f1c5571e7649814",
	                    "EndpointID": "a39b41b3ad3e63a6fe1c844d5ffbf7cf765e19876c05de1e6494d1a2189fa00b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-060912",
	                        "f46765527c33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
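The inspect dump above pairs each exposed container port with an ephemeral host port on 127.0.0.1 under `NetworkSettings.Ports`. A minimal sketch for pulling that mapping back out of a saved dump, assuming it was captured with `docker inspect addons-060912 > inspect.json` (hypothetical capture; a trimmed stand-in for the file is embedded so the parsing step is self-contained):

```shell
# Recover the container-port -> host-port mapping from a saved `docker inspect` dump.
# In this report the dump would come from: docker inspect addons-060912 > inspect.json
# A trimmed stand-in with two of the five bindings is embedded below.
cat > inspect.json <<'EOF'
[{"NetworkSettings":{"Ports":{
  "22/tcp":   [{"HostIp":"127.0.0.1","HostPort":"32768"}],
  "8443/tcp": [{"HostIp":"127.0.0.1","HostPort":"32771"}]}}}]
EOF
python3 - <<'EOF'
import json
ports = json.load(open("inspect.json"))[0]["NetworkSettings"]["Ports"]
for cport, binds in sorted(ports.items()):
    # e.g. 22/tcp -> 127.0.0.1:32768
    print(f"{cport} -> {binds[0]['HostIp']}:{binds[0]['HostPort']}")
EOF
```

Against a live container the same data is available directly via `docker inspect --format '{{json .NetworkSettings.Ports}}' addons-060912`.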
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-060912 -n addons-060912
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 logs -n 25: (1.490395387s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-469167              | download-only-469167   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | -o=json --download-only              | download-only-447269   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | -p download-only-447269              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-447269              | download-only-447269   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-469167              | download-only-469167   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-447269              | download-only-447269   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | --download-only -p                   | download-docker-266880 | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | download-docker-266880               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-266880            | download-docker-266880 | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | --download-only -p                   | binary-mirror-083327   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | binary-mirror-083327                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44087               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-083327              | binary-mirror-083327   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| addons  | enable dashboard -p                  | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | addons-060912                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | addons-060912                        |                        |         |         |                     |                     |
	| start   | -p addons-060912 --wait=true         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:55 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:03 UTC | 20 Sep 24 19:03 UTC |
	|         | -p addons-060912                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:03 UTC | 20 Sep 24 19:03 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-060912 ip                     | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	| addons  | addons-060912 addons                 | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-060912 addons                 | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:05 UTC | 20 Sep 24 19:05 UTC |
	|         | addons-060912                        |                        |         |         |                     |                     |
	| ssh     | addons-060912 ssh curl -s            | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-060912 ip                     | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	| addons  | addons-060912 addons disable         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
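In the Audit table above, a row with an empty End Time is a command that never completed; here that is the `ssh curl` ingress probe started at 19:05, matching the TestAddons/parallel/Ingress failure. A quick filter for such rows over a saved copy of the table (a sketch; `audit.txt` below is a hypothetical two-row stand-in, and with `|` as the field separator the End Time is field 8):

```shell
# Flag audit rows whose End Time column is blank (the command never finished).
# audit.txt is a two-row stand-in for the Audit table in this report.
cat > audit.txt <<'EOF'
| ip      | addons-060912 ip          | addons-060912 | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
| ssh     | addons-060912 ssh curl -s | addons-060912 | jenkins | v1.34.0 | 20 Sep 24 19:05 UTC |                     |
EOF
# Field 8 is End Time; print the Command and Args of rows where it is all blanks.
awk -F'|' '$8 ~ /^[[:space:]]*$/ {print $2, $3}' audit.txt
```

Only the `ssh curl` row is printed, since the `ip` row has an End Time.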
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:52:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:52:15.407585  593872 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:52:15.407747  593872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:15.407757  593872 out.go:358] Setting ErrFile to fd 2...
	I0920 18:52:15.407763  593872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:15.408019  593872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 18:52:15.408464  593872 out.go:352] Setting JSON to false
	I0920 18:52:15.409334  593872 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9286,"bootTime":1726849050,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 18:52:15.409413  593872 start.go:139] virtualization:  
	I0920 18:52:15.412765  593872 out.go:177] * [addons-060912] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:52:15.415653  593872 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:52:15.415768  593872 notify.go:220] Checking for updates...
	I0920 18:52:15.421427  593872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:52:15.424323  593872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 18:52:15.427237  593872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	I0920 18:52:15.429911  593872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:52:15.432646  593872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:52:15.435403  593872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:52:15.470290  593872 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:52:15.470417  593872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:52:15.520925  593872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:52:15.51145031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:52:15.521041  593872 docker.go:318] overlay module found
	I0920 18:52:15.523900  593872 out.go:177] * Using the docker driver based on user configuration
	I0920 18:52:15.526500  593872 start.go:297] selected driver: docker
	I0920 18:52:15.526517  593872 start.go:901] validating driver "docker" against <nil>
	I0920 18:52:15.526531  593872 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:52:15.527216  593872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:52:15.581330  593872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:52:15.571863527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:52:15.581548  593872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:52:15.581786  593872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:52:15.584366  593872 out.go:177] * Using Docker driver with root privileges
	I0920 18:52:15.587045  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:52:15.587107  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:52:15.587121  593872 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:52:15.587223  593872 start.go:340] cluster config:
	{Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:15.590219  593872 out.go:177] * Starting "addons-060912" primary control-plane node in "addons-060912" cluster
	I0920 18:52:15.592826  593872 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:52:15.595652  593872 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:52:15.598342  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:15.598399  593872 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 18:52:15.598412  593872 cache.go:56] Caching tarball of preloaded images
	I0920 18:52:15.598446  593872 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:52:15.598514  593872 preload.go:172] Found /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 18:52:15.598525  593872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:52:15.598880  593872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json ...
	I0920 18:52:15.598952  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json: {Name:mk641e5e8bae111e7b0856105b10230ca65c9fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:15.614244  593872 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:52:15.614382  593872 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:52:15.614407  593872 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:52:15.614416  593872 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:52:15.614424  593872 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:52:15.614429  593872 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 18:52:32.649742  593872 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 18:52:32.649783  593872 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:52:32.649812  593872 start.go:360] acquireMachinesLock for addons-060912: {Name:mkdf9efeada37d375617519bd8189e870133c61c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:52:32.649937  593872 start.go:364] duration metric: took 105.149µs to acquireMachinesLock for "addons-060912"
	I0920 18:52:32.649968  593872 start.go:93] Provisioning new machine with config: &{Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:52:32.650096  593872 start.go:125] createHost starting for "" (driver="docker")
	I0920 18:52:32.652781  593872 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 18:52:32.653060  593872 start.go:159] libmachine.API.Create for "addons-060912" (driver="docker")
	I0920 18:52:32.653099  593872 client.go:168] LocalClient.Create starting
	I0920 18:52:32.653230  593872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem
	I0920 18:52:32.860960  593872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem
	I0920 18:52:33.807141  593872 cli_runner.go:164] Run: docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 18:52:33.822909  593872 cli_runner.go:211] docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 18:52:33.823003  593872 network_create.go:284] running [docker network inspect addons-060912] to gather additional debugging logs...
	I0920 18:52:33.823041  593872 cli_runner.go:164] Run: docker network inspect addons-060912
	W0920 18:52:33.836862  593872 cli_runner.go:211] docker network inspect addons-060912 returned with exit code 1
	I0920 18:52:33.836897  593872 network_create.go:287] error running [docker network inspect addons-060912]: docker network inspect addons-060912: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-060912 not found
	I0920 18:52:33.836912  593872 network_create.go:289] output of [docker network inspect addons-060912]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-060912 not found
	
	** /stderr **
	I0920 18:52:33.837018  593872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:52:33.853516  593872 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400048fc60}
	I0920 18:52:33.853561  593872 network_create.go:124] attempt to create docker network addons-060912 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 18:52:33.853624  593872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-060912 addons-060912
	I0920 18:52:33.925138  593872 network_create.go:108] docker network addons-060912 192.168.49.0/24 created
	I0920 18:52:33.925170  593872 kic.go:121] calculated static IP "192.168.49.2" for the "addons-060912" container
	I0920 18:52:33.925251  593872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 18:52:33.939300  593872 cli_runner.go:164] Run: docker volume create addons-060912 --label name.minikube.sigs.k8s.io=addons-060912 --label created_by.minikube.sigs.k8s.io=true
	I0920 18:52:33.956121  593872 oci.go:103] Successfully created a docker volume addons-060912
	I0920 18:52:33.956221  593872 cli_runner.go:164] Run: docker run --rm --name addons-060912-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --entrypoint /usr/bin/test -v addons-060912:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 18:52:35.542485  593872 cli_runner.go:217] Completed: docker run --rm --name addons-060912-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --entrypoint /usr/bin/test -v addons-060912:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.586222321s)
	I0920 18:52:35.542517  593872 oci.go:107] Successfully prepared a docker volume addons-060912
	I0920 18:52:35.542537  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:35.542557  593872 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 18:52:35.542630  593872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-060912:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 18:52:39.667870  593872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-060912:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.12519698s)
	I0920 18:52:39.667901  593872 kic.go:203] duration metric: took 4.125341455s to extract preloaded images to volume ...
	W0920 18:52:39.668057  593872 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 18:52:39.668171  593872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 18:52:39.725179  593872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-060912 --name addons-060912 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-060912 --network addons-060912 --ip 192.168.49.2 --volume addons-060912:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 18:52:40.064748  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Running}}
	I0920 18:52:40.090550  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.119088  593872 cli_runner.go:164] Run: docker exec addons-060912 stat /var/lib/dpkg/alternatives/iptables
	I0920 18:52:40.194481  593872 oci.go:144] the created container "addons-060912" has a running status.
	I0920 18:52:40.194657  593872 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa...
	I0920 18:52:40.558917  593872 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 18:52:40.602421  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.629886  593872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 18:52:40.629905  593872 kic_runner.go:114] Args: [docker exec --privileged addons-060912 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 18:52:40.708677  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.734009  593872 machine.go:93] provisionDockerMachine start ...
	I0920 18:52:40.734111  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:40.755383  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:40.755665  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:40.755687  593872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:52:40.930414  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-060912
	
	I0920 18:52:40.930441  593872 ubuntu.go:169] provisioning hostname "addons-060912"
	I0920 18:52:40.930507  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:40.955848  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:40.956093  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:40.956114  593872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-060912 && echo "addons-060912" | sudo tee /etc/hostname
	I0920 18:52:41.124769  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-060912
	
	I0920 18:52:41.124926  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:41.150096  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:41.150348  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:41.150366  593872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-060912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-060912/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-060912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:52:41.295129  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:52:41.295158  593872 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19679-586329/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-586329/.minikube}
	I0920 18:52:41.295190  593872 ubuntu.go:177] setting up certificates
	I0920 18:52:41.295203  593872 provision.go:84] configureAuth start
	I0920 18:52:41.295277  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:41.317921  593872 provision.go:143] copyHostCerts
	I0920 18:52:41.318013  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/ca.pem (1082 bytes)
	I0920 18:52:41.318141  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/cert.pem (1123 bytes)
	I0920 18:52:41.318206  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/key.pem (1679 bytes)
	I0920 18:52:41.318258  593872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem org=jenkins.addons-060912 san=[127.0.0.1 192.168.49.2 addons-060912 localhost minikube]
	I0920 18:52:42.112316  593872 provision.go:177] copyRemoteCerts
	I0920 18:52:42.112394  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:52:42.112441  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.134267  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.242047  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:52:42.271920  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:52:42.299774  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:52:42.328079  593872 provision.go:87] duration metric: took 1.032855668s to configureAuth
	I0920 18:52:42.328107  593872 ubuntu.go:193] setting minikube options for container-runtime
	I0920 18:52:42.328339  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:52:42.328485  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.347344  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:42.347620  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:42.347642  593872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:52:42.592794  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:52:42.592858  593872 machine.go:96] duration metric: took 1.858825465s to provisionDockerMachine
	I0920 18:52:42.592883  593872 client.go:171] duration metric: took 9.939773855s to LocalClient.Create
	I0920 18:52:42.592928  593872 start.go:167] duration metric: took 9.939858146s to libmachine.API.Create "addons-060912"
	I0920 18:52:42.592956  593872 start.go:293] postStartSetup for "addons-060912" (driver="docker")
	I0920 18:52:42.592983  593872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:52:42.593088  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:52:42.593176  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.610673  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.712244  593872 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:52:42.715200  593872 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 18:52:42.715236  593872 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 18:52:42.715248  593872 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 18:52:42.715255  593872 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 18:52:42.715270  593872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-586329/.minikube/addons for local assets ...
	I0920 18:52:42.715339  593872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-586329/.minikube/files for local assets ...
	I0920 18:52:42.715362  593872 start.go:296] duration metric: took 122.386575ms for postStartSetup
	I0920 18:52:42.715678  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:42.734222  593872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json ...
	I0920 18:52:42.734515  593872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:52:42.734561  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.751254  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.847551  593872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 18:52:42.851990  593872 start.go:128] duration metric: took 10.201875795s to createHost
	I0920 18:52:42.852014  593872 start.go:83] releasing machines lock for "addons-060912", held for 10.20206475s
	I0920 18:52:42.852104  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:42.869047  593872 ssh_runner.go:195] Run: cat /version.json
	I0920 18:52:42.869104  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.869386  593872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:52:42.869455  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.899611  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.901003  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:43.143986  593872 ssh_runner.go:195] Run: systemctl --version
	I0920 18:52:43.148494  593872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:52:43.290058  593872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:52:43.294460  593872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:52:43.319067  593872 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 18:52:43.319189  593872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:52:43.355578  593872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 18:52:43.355601  593872 start.go:495] detecting cgroup driver to use...
	I0920 18:52:43.355665  593872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 18:52:43.355740  593872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:52:43.372488  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:52:43.384584  593872 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:52:43.384660  593872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:52:43.398596  593872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:52:43.413969  593872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:52:43.506921  593872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:52:43.598933  593872 docker.go:233] disabling docker service ...
	I0920 18:52:43.599074  593872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:52:43.619211  593872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:52:43.632097  593872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:52:43.733486  593872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:52:43.832796  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:52:43.844479  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:52:43.861973  593872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:52:43.862048  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.873308  593872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:52:43.873384  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.884037  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.894744  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.905984  593872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:52:43.916341  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.926330  593872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.942760  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.952451  593872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:52:43.961121  593872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:52:43.969336  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:44.051836  593872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:52:44.177573  593872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:52:44.177688  593872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:52:44.181787  593872 start.go:563] Will wait 60s for crictl version
	I0920 18:52:44.181856  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:52:44.185690  593872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:52:44.231062  593872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 18:52:44.231227  593872 ssh_runner.go:195] Run: crio --version
	I0920 18:52:44.269973  593872 ssh_runner.go:195] Run: crio --version
	I0920 18:52:44.310781  593872 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 18:52:44.313034  593872 cli_runner.go:164] Run: docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:52:44.329327  593872 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 18:52:44.332861  593872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:52:44.343516  593872 kubeadm.go:883] updating cluster {Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:52:44.343644  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:44.343708  593872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:44.419323  593872 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:44.419350  593872 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:52:44.419407  593872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:44.460038  593872 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:44.460063  593872 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:52:44.460072  593872 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 18:52:44.460202  593872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-060912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:52:44.460306  593872 ssh_runner.go:195] Run: crio config
	I0920 18:52:44.514388  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:52:44.514413  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:52:44.514425  593872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:52:44.514455  593872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-060912 NodeName:addons-060912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:52:44.514692  593872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-060912"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:52:44.514779  593872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:52:44.524006  593872 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:52:44.524086  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:52:44.532920  593872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 18:52:44.550839  593872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:52:44.569315  593872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0920 18:52:44.588095  593872 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 18:52:44.591834  593872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:52:44.603202  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:44.683106  593872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:52:44.698119  593872 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912 for IP: 192.168.49.2
	I0920 18:52:44.698180  593872 certs.go:194] generating shared ca certs ...
	I0920 18:52:44.698214  593872 certs.go:226] acquiring lock for ca certs: {Name:mk7eb18302258cdace745a9485ebacfefa55b617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:44.698372  593872 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key
	I0920 18:52:45.773992  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt ...
	I0920 18:52:45.774024  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt: {Name:mk69bb3c03ec081974b98f7c83bdeca9a6b769c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:45.774223  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key ...
	I0920 18:52:45.774236  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key: {Name:mkb28aa16c08ff68a5c63f20cf7a4bc238a65fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:45.774329  593872 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key
	I0920 18:52:46.306094  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt ...
	I0920 18:52:46.306172  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt: {Name:mk13a902be7ee771aaabf84d4d3b54c93512ec07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.306433  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key ...
	I0920 18:52:46.306468  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key: {Name:mk1a89b4cc2e765480e21d5ef942bf06a139d088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.307202  593872 certs.go:256] generating profile certs ...
	I0920 18:52:46.307348  593872 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key
	I0920 18:52:46.307374  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt with IP's: []
	I0920 18:52:46.605180  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt ...
	I0920 18:52:46.605217  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: {Name:mk8ec6a9f7340d97847cfc91d6f9300f0c6bcb28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.605895  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key ...
	I0920 18:52:46.605916  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key: {Name:mk386836124c30368ae858b7208f9c6a723630c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.606065  593872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2
	I0920 18:52:46.606089  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 18:52:46.979328  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 ...
	I0920 18:52:46.979362  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2: {Name:mk3de371d8cb695b97e343d91e61d450c7d1fceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.980031  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2 ...
	I0920 18:52:46.980049  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2: {Name:mk9c2eba1553b51025132aa06ce9c8b0e76efbd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.980539  593872 certs.go:381] copying /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 -> /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt
	I0920 18:52:46.980627  593872 certs.go:385] copying /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2 -> /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key
	I0920 18:52:46.980686  593872 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key
	I0920 18:52:46.980709  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt with IP's: []
	I0920 18:52:47.324830  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt ...
	I0920 18:52:47.324865  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt: {Name:mk4ae1dd5d3ae6c97cd47828e57b9a54fe850ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:47.325050  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key ...
	I0920 18:52:47.325068  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key: {Name:mkc15d867a2714a19ac6e38280d1d8789074dcb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:47.325295  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:52:47.325345  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:52:47.325375  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:52:47.325407  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem (1679 bytes)
	I0920 18:52:47.326508  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:52:47.355471  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 18:52:47.380228  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:52:47.404994  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:52:47.431136  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:52:47.456460  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:52:47.482481  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:52:47.506787  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:52:47.530822  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:52:47.555789  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:52:47.573794  593872 ssh_runner.go:195] Run: openssl version
	I0920 18:52:47.579677  593872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:52:47.589418  593872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.593050  593872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:52 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.593170  593872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.600533  593872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:52:47.610126  593872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:52:47.613505  593872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:52:47.613554  593872 kubeadm.go:392] StartCluster: {Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:47.613633  593872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:52:47.613691  593872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:52:47.655005  593872 cri.go:89] found id: ""
	I0920 18:52:47.655106  593872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:52:47.664307  593872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:52:47.673271  593872 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 18:52:47.673378  593872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:52:47.682354  593872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:52:47.682377  593872 kubeadm.go:157] found existing configuration files:
	
	I0920 18:52:47.682450  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:52:47.692197  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:52:47.692269  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:52:47.701005  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:52:47.709846  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:52:47.709939  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:52:47.718667  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:52:47.727606  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:52:47.727692  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:52:47.736256  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:52:47.745178  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:52:47.745277  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:52:47.753885  593872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 18:52:47.794524  593872 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:52:47.794742  593872 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:52:47.830867  593872 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 18:52:47.831080  593872 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 18:52:47.831147  593872 kubeadm.go:310] OS: Linux
	I0920 18:52:47.831230  593872 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 18:52:47.831314  593872 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 18:52:47.831391  593872 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 18:52:47.831469  593872 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 18:52:47.831550  593872 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 18:52:47.831627  593872 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 18:52:47.831704  593872 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 18:52:47.831782  593872 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 18:52:47.831867  593872 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 18:52:47.892879  593872 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:52:47.893045  593872 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:52:47.893173  593872 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:52:47.900100  593872 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:52:47.904767  593872 out.go:235]   - Generating certificates and keys ...
	I0920 18:52:47.904883  593872 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:52:47.904967  593872 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:52:48.301483  593872 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:52:48.505712  593872 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:52:48.627729  593872 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:52:49.408566  593872 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:52:49.585470  593872 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:52:49.585855  593872 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-060912 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:52:50.403787  593872 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:52:50.404133  593872 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-060912 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:52:50.541148  593872 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:52:50.956925  593872 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:52:51.982371  593872 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:52:51.982653  593872 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:52:52.374506  593872 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:52:52.684664  593872 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:52:53.299054  593872 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:52:53.724444  593872 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:52:54.066667  593872 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:52:54.067475  593872 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:52:54.070541  593872 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:52:54.072885  593872 out.go:235]   - Booting up control plane ...
	I0920 18:52:54.072994  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:52:54.073071  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:52:54.073988  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:52:54.087870  593872 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:52:54.094550  593872 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:52:54.094874  593872 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:52:54.193678  593872 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:52:54.193802  593872 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:52:55.195346  593872 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001638684s
	I0920 18:52:55.195439  593872 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:53:00.697120  593872 kubeadm.go:310] [api-check] The API server is healthy after 5.501870038s
	I0920 18:53:00.728997  593872 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:53:00.750818  593872 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:53:00.777564  593872 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:53:00.777765  593872 kubeadm.go:310] [mark-control-plane] Marking the node addons-060912 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:53:00.791626  593872 kubeadm.go:310] [bootstrap-token] Using token: 3mukj1.5gr6p80qxuq1esbm
	I0920 18:53:00.793695  593872 out.go:235]   - Configuring RBAC rules ...
	I0920 18:53:00.793825  593872 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:53:00.798878  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:53:00.806432  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:53:00.810066  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:53:00.815042  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:53:00.818568  593872 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:53:01.105603  593872 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:53:01.548911  593872 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:53:02.106478  593872 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:53:02.106507  593872 kubeadm.go:310] 
	I0920 18:53:02.106578  593872 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:53:02.106584  593872 kubeadm.go:310] 
	I0920 18:53:02.106721  593872 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:53:02.106734  593872 kubeadm.go:310] 
	I0920 18:53:02.106772  593872 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:53:02.106834  593872 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:53:02.106884  593872 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:53:02.106888  593872 kubeadm.go:310] 
	I0920 18:53:02.106941  593872 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:53:02.106945  593872 kubeadm.go:310] 
	I0920 18:53:02.106992  593872 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:53:02.106997  593872 kubeadm.go:310] 
	I0920 18:53:02.107062  593872 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:53:02.107137  593872 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:53:02.107203  593872 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:53:02.107208  593872 kubeadm.go:310] 
	I0920 18:53:02.107290  593872 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:53:02.107368  593872 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:53:02.107373  593872 kubeadm.go:310] 
	I0920 18:53:02.107455  593872 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3mukj1.5gr6p80qxuq1esbm \
	I0920 18:53:02.107556  593872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee5188aaaabb34e982a2e59e30a557aaa604ab6ab39002e0379fe9f0994613c \
	I0920 18:53:02.107576  593872 kubeadm.go:310] 	--control-plane 
	I0920 18:53:02.107579  593872 kubeadm.go:310] 
	I0920 18:53:02.107664  593872 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:53:02.107668  593872 kubeadm.go:310] 
	I0920 18:53:02.107748  593872 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3mukj1.5gr6p80qxuq1esbm \
	I0920 18:53:02.107850  593872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee5188aaaabb34e982a2e59e30a557aaa604ab6ab39002e0379fe9f0994613c 
	I0920 18:53:02.110386  593872 kubeadm.go:310] W0920 18:52:47.790907    1181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:53:02.110692  593872 kubeadm.go:310] W0920 18:52:47.791995    1181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:53:02.110919  593872 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 18:53:02.111098  593872 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:53:02.111123  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:53:02.111136  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:53:02.113349  593872 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:53:02.115174  593872 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:53:02.119312  593872 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:53:02.119335  593872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:53:02.142105  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:53:02.431658  593872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:53:02.431817  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:02.431901  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-060912 minikube.k8s.io/updated_at=2024_09_20T18_53_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-060912 minikube.k8s.io/primary=true
	I0920 18:53:02.446105  593872 ops.go:34] apiserver oom_adj: -16
	I0920 18:53:02.565999  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:03.066570  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:03.566114  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:04.066703  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:04.566700  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:05.066202  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:05.566810  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.066185  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.566942  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.656897  593872 kubeadm.go:1113] duration metric: took 4.225127214s to wait for elevateKubeSystemPrivileges
	I0920 18:53:06.656923  593872 kubeadm.go:394] duration metric: took 19.04337458s to StartCluster
	I0920 18:53:06.656941  593872 settings.go:142] acquiring lock: {Name:mk20a33ee294fe7ee1acfd59cbfa4fb0357cdddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:06.657086  593872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 18:53:06.657504  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/kubeconfig: {Name:mke1c46b803a8499b182d8427df0204efbd97826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:06.658369  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:53:06.658394  593872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:53:06.658659  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:06.658701  593872 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:53:06.658785  593872 addons.go:69] Setting yakd=true in profile "addons-060912"
	I0920 18:53:06.658801  593872 addons.go:234] Setting addon yakd=true in "addons-060912"
	I0920 18:53:06.658825  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.659339  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.659588  593872 addons.go:69] Setting inspektor-gadget=true in profile "addons-060912"
	I0920 18:53:06.659613  593872 addons.go:234] Setting addon inspektor-gadget=true in "addons-060912"
	I0920 18:53:06.659639  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.660068  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.660634  593872 addons.go:69] Setting cloud-spanner=true in profile "addons-060912"
	I0920 18:53:06.660658  593872 addons.go:234] Setting addon cloud-spanner=true in "addons-060912"
	I0920 18:53:06.660694  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.661122  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.664101  593872 addons.go:69] Setting metrics-server=true in profile "addons-060912"
	I0920 18:53:06.664174  593872 addons.go:234] Setting addon metrics-server=true in "addons-060912"
	I0920 18:53:06.664225  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.664719  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.667132  593872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-060912"
	I0920 18:53:06.667206  593872 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-060912"
	I0920 18:53:06.667241  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.667711  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.680289  593872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-060912"
	I0920 18:53:06.680324  593872 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-060912"
	I0920 18:53:06.680367  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.680373  593872 addons.go:69] Setting default-storageclass=true in profile "addons-060912"
	I0920 18:53:06.680394  593872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-060912"
	I0920 18:53:06.680712  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.680844  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.691123  593872 addons.go:69] Setting registry=true in profile "addons-060912"
	I0920 18:53:06.691155  593872 addons.go:234] Setting addon registry=true in "addons-060912"
	I0920 18:53:06.691192  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.691227  593872 addons.go:69] Setting gcp-auth=true in profile "addons-060912"
	I0920 18:53:06.691250  593872 mustload.go:65] Loading cluster: addons-060912
	I0920 18:53:06.691423  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:06.691663  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.691671  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.711088  593872 addons.go:69] Setting storage-provisioner=true in profile "addons-060912"
	I0920 18:53:06.711123  593872 addons.go:234] Setting addon storage-provisioner=true in "addons-060912"
	I0920 18:53:06.711160  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.711632  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.711881  593872 addons.go:69] Setting ingress=true in profile "addons-060912"
	I0920 18:53:06.711898  593872 addons.go:234] Setting addon ingress=true in "addons-060912"
	I0920 18:53:06.711935  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.712343  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.723158  593872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-060912"
	I0920 18:53:06.723179  593872 addons.go:69] Setting ingress-dns=true in profile "addons-060912"
	I0920 18:53:06.723193  593872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-060912"
	I0920 18:53:06.723201  593872 addons.go:234] Setting addon ingress-dns=true in "addons-060912"
	I0920 18:53:06.723253  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.723525  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.723678  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.728597  593872 addons.go:69] Setting volcano=true in profile "addons-060912"
	I0920 18:53:06.728639  593872 addons.go:234] Setting addon volcano=true in "addons-060912"
	I0920 18:53:06.728679  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.729150  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.748969  593872 out.go:177] * Verifying Kubernetes components...
	I0920 18:53:06.760944  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:53:06.763463  593872 addons.go:69] Setting volumesnapshots=true in profile "addons-060912"
	I0920 18:53:06.763497  593872 addons.go:234] Setting addon volumesnapshots=true in "addons-060912"
	I0920 18:53:06.763545  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.764046  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.800789  593872 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:53:06.802865  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:53:06.803037  593872 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:53:06.803166  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.821921  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:53:06.824580  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:53:06.824689  593872 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:53:06.827502  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:53:06.830370  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:53:06.832993  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:53:06.836463  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:53:06.889558  593872 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 18:53:06.890713  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:53:06.893342  593872 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:53:06.893363  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:53:06.893428  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.919558  593872 addons.go:234] Setting addon default-storageclass=true in "addons-060912"
	I0920 18:53:06.919598  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.923397  593872 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:53:06.924653  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.928129  593872 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-060912"
	I0920 18:53:06.936192  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.936680  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.949702  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:06.931219  593872 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:53:06.949911  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:53:06.949984  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.950158  593872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:53:06.953013  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:06.950351  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:53:06.931395  593872 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:53:06.955338  593872 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:53:06.955415  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	W0920 18:53:06.950474  593872 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:53:06.962421  593872 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:53:06.962836  593872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:53:06.962853  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:53:06.962918  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.950358  593872 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:53:06.964337  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:06.979232  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:53:06.984485  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:53:06.979317  593872 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:53:07.001978  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:53:07.002102  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.019211  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:07.021298  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.038210  593872 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:53:07.038235  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:53:07.038313  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.038929  593872 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:53:07.039087  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:53:07.039119  593872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:53:07.039189  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.058038  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:53:07.062112  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:53:07.062141  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:53:07.062211  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.064857  593872 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:53:07.070082  593872 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:53:07.070106  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:53:07.070177  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.078060  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:53:07.078085  593872 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:53:07.078158  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.095923  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.105890  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.131430  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.134781  593872 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:53:07.136896  593872 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:53:07.138991  593872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:53:07.139091  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:53:07.139160  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.171105  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.173369  593872 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:53:07.173394  593872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:53:07.173464  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.203565  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.227872  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.240177  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.254474  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.256680  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.271504  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.296820  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.535433  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:53:07.535502  593872 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:53:07.580640  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:53:07.609002  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:53:07.636963  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:53:07.644507  593872 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:53:07.644575  593872 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:53:07.650358  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:53:07.748416  593872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:53:07.760493  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:53:07.760561  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:53:07.769526  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:53:07.769590  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:53:07.772733  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:53:07.772813  593872 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:53:07.776888  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:53:07.792143  593872 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:53:07.792217  593872 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:53:07.799482  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:53:07.799550  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:53:07.823809  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:53:07.841182  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:53:07.875970  593872 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:53:07.876048  593872 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:53:07.940871  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:53:07.940939  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:53:07.944185  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:53:07.944261  593872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:53:07.969102  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:53:07.969176  593872 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:53:08.008762  593872 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:53:08.008849  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:53:08.024429  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:53:08.024500  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:53:08.101275  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:53:08.101369  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:53:08.104605  593872 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:53:08.104668  593872 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:53:08.125142  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:53:08.125223  593872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:53:08.153003  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:53:08.153081  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:53:08.192290  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:53:08.213189  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:53:08.213258  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:53:08.237465  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:53:08.237541  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:53:08.266264  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:53:08.273091  593872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:53:08.273160  593872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:53:08.295064  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:53:08.337669  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:53:08.337737  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:53:08.361168  593872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:53:08.361244  593872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:53:08.381395  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:53:08.381462  593872 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:53:08.434137  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:53:08.434209  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:53:08.476175  593872 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:53:08.476243  593872 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:53:08.523767  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:53:08.523881  593872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:53:08.546597  593872 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:08.546686  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:53:08.570312  593872 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:53:08.570338  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:53:08.601983  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:53:08.602012  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:53:08.688455  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:53:08.767479  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:08.771253  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:53:08.771280  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:53:08.909646  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:53:08.909674  593872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:53:09.083120  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:53:09.366397  593872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.475641839s)
	I0920 18:53:09.366431  593872 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 18:53:10.656771  593872 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-060912" context rescaled to 1 replicas
	I0920 18:53:12.612519  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.031804896s)
	I0920 18:53:12.612722  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.924224384s)
	I0920 18:53:12.612735  593872 addons.go:475] Verifying addon ingress=true in "addons-060912"
	I0920 18:53:12.612613  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.975577954s)
	I0920 18:53:12.612622  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.96220398s)
	I0920 18:53:12.612632  593872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.864142489s)
	I0920 18:53:12.613848  593872 node_ready.go:35] waiting up to 6m0s for node "addons-060912" to be "Ready" ...
	I0920 18:53:12.612640  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.835693426s)
	I0920 18:53:12.612649  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.788768719s)
	I0920 18:53:12.612677  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.771429754s)
	I0920 18:53:12.612686  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.420319709s)
	I0920 18:53:12.614217  593872 addons.go:475] Verifying addon registry=true in "addons-060912"
	I0920 18:53:12.612700  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.346358677s)
	I0920 18:53:12.612710  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.317571088s)
	I0920 18:53:12.612603  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.003520156s)
	I0920 18:53:12.614629  593872 addons.go:475] Verifying addon metrics-server=true in "addons-060912"
	I0920 18:53:12.615210  593872 out.go:177] * Verifying ingress addon...
	I0920 18:53:12.616733  593872 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-060912 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:53:12.616811  593872 out.go:177] * Verifying registry addon...
	I0920 18:53:12.618820  593872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 18:53:12.621444  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:53:12.657275  593872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:53:12.657379  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:12.659859  593872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:53:12.659933  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0920 18:53:12.681416  593872 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:53:12.767378  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.999849337s)
	W0920 18:53:12.767492  593872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:53:12.767543  593872 retry.go:31] will retry after 146.594076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:53:12.914870  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:13.020525  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.937354531s)
	I0920 18:53:13.020615  593872 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-060912"
	I0920 18:53:13.023641  593872 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:53:13.026354  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:53:13.059604  593872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:53:13.059630  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:13.157073  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:13.158584  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:13.530916  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:13.623039  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:13.625710  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:14.031109  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:14.132801  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:14.133257  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:14.536672  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:14.617333  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:14.625172  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:14.626190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.037784  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:15.139083  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.140081  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:15.530807  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:15.625424  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:15.627060  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.811713  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.896729674s)
	I0920 18:53:16.030500  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:16.131340  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:16.132189  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:16.245904  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:53:16.246064  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:16.271250  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:16.400062  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:53:16.420647  593872 addons.go:234] Setting addon gcp-auth=true in "addons-060912"
	I0920 18:53:16.420709  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:16.421221  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:16.453077  593872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:53:16.453133  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:16.472122  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:16.530200  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:16.594150  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:16.595930  593872 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:53:16.598033  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:53:16.598096  593872 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:53:16.617576  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:16.623818  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:16.627395  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:16.654047  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:53:16.654122  593872 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:53:16.675521  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:53:16.675615  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:53:16.696169  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:53:17.030750  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:17.123475  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:17.129949  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:17.398772  593872 addons.go:475] Verifying addon gcp-auth=true in "addons-060912"
	I0920 18:53:17.400714  593872 out.go:177] * Verifying gcp-auth addon...
	I0920 18:53:17.403254  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:53:17.424327  593872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:53:17.424348  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:17.530789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:17.622216  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:17.624897  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:17.908276  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:18.032558  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:18.123585  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:18.125296  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:18.409068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:18.535824  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:18.619988  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:18.631248  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:18.632258  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:18.906952  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:19.031275  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:19.123974  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:19.125329  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:19.407041  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:19.530437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:19.623209  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:19.626778  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:19.907358  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:20.031410  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:20.124451  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:20.127885  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:20.408163  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:20.530304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:20.624641  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:20.627521  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:20.640571  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:20.906590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:21.030716  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:21.123396  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:21.125966  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:21.407311  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:21.530248  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:21.632768  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:21.633772  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:21.907461  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:22.030655  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:22.122491  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:22.124032  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:22.407518  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:22.529985  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:22.627170  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:22.627503  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:22.906531  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:23.030537  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:23.117862  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:23.122924  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:23.124460  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:23.406221  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:23.530623  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:23.622743  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:23.624225  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:23.906508  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:24.030964  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:24.123359  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:24.124718  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:24.406947  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:24.530397  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:24.622656  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:24.625089  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:24.906346  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:25.030863  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:25.123281  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:25.124762  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:25.406740  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:25.529934  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:25.618285  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:25.623223  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:25.625275  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:25.907343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:26.029874  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:26.122880  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:26.125558  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:26.406428  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:26.529876  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:26.622569  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:26.624263  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:26.907876  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:27.030892  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:27.122763  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:27.125752  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:27.407385  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:27.529890  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:27.623664  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:27.625235  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:27.906932  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:28.031546  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:28.117142  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:28.124003  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:28.125268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:28.407350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:28.530316  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:28.622496  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:28.625367  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:28.906550  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:29.030563  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:29.123518  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:29.125027  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:29.406272  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:29.530318  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:29.623487  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:29.626002  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:29.907054  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:30.034955  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:30.117886  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:30.123770  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:30.130136  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:30.406799  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:30.530068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:30.623431  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:30.625815  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:30.906721  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:31.030766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:31.122470  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:31.125131  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:31.406466  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:31.530019  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:31.623357  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:31.625132  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:31.906429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:32.030130  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:32.122724  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:32.125475  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:32.406950  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:32.530073  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:32.617262  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:32.623567  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:32.624606  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:32.906774  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:33.030618  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:33.122463  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:33.124352  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:33.406566  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:33.529976  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:33.623194  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:33.625569  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:33.906893  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:34.030568  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:34.124214  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:34.125664  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:34.406789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:34.530093  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:34.617503  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:34.622267  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:34.624640  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:34.906716  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:35.030814  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:35.122711  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:35.124484  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:35.406906  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:35.530090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:35.628989  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:35.643448  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:35.907749  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:36.033222  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:36.123115  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:36.125074  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:36.406478  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:36.530372  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:36.617537  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:36.623124  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:36.624707  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:36.907705  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:37.032609  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:37.122402  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:37.124436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:37.410158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:37.530964  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:37.623290  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:37.624638  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:37.908427  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:38.032501  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:38.123432  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:38.125097  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:38.407006  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:38.531090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:38.617637  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:38.623531  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:38.624757  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:38.907994  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:39.030429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:39.122831  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:39.125472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:39.406900  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:39.530683  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:39.622682  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:39.625408  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:39.906436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:40.032512  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:40.122833  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:40.125873  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:40.407481  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:40.530433  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:40.623305  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:40.625601  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:40.907104  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:41.030489  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:41.117535  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:41.123596  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:41.125740  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:41.408742  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:41.530414  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:41.623195  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:41.624942  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:41.906278  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:42.030219  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:42.124663  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:42.126897  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:42.406451  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:42.529842  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:42.623685  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:42.624861  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:42.907530  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:43.030270  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:43.118084  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:43.122827  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:43.124122  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:43.406531  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:43.530126  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:43.623043  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:43.624496  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:43.906195  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:44.030547  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:44.123616  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:44.124870  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:44.407296  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:44.530337  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:44.623165  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:44.624362  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:44.906714  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:45.030883  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:45.127738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:45.130246  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:45.407317  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:45.530858  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:45.617283  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:45.623572  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:45.626121  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:45.907349  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:46.029896  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:46.122967  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:46.124510  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:46.406878  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:46.529797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:46.623467  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:46.625901  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:46.907092  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:47.029591  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:47.123043  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:47.124598  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:47.407203  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:47.530170  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:47.617840  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:47.622988  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:47.625052  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:47.906284  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:48.030282  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:48.123543  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:48.125837  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:48.407567  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:48.530061  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:48.622624  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:48.624101  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:48.906319  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:49.029685  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:49.123390  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:49.125759  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:49.406675  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:49.530424  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:49.623059  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:49.624375  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:49.914789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:50.073315  593872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:53:50.073346  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:50.177542  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:50.178097  593872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:53:50.178119  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:50.178911  593872 node_ready.go:49] node "addons-060912" has status "Ready":"True"
	I0920 18:53:50.178932  593872 node_ready.go:38] duration metric: took 37.565064524s for node "addons-060912" to be "Ready" ...
	I0920 18:53:50.178943  593872 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:53:50.209529  593872 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:50.412871  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:50.534356  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:50.633995  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:50.635103  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:50.926509  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:51.040298  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:51.123773  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:51.127158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:51.407040  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:51.532755  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:51.632804  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:51.634087  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:51.932590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:52.032350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:52.124423  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:52.129016  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:52.216901  593872 pod_ready.go:93] pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.216927  593872 pod_ready.go:82] duration metric: took 2.007357992s for pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.216954  593872 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.227598  593872 pod_ready.go:93] pod "etcd-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.227626  593872 pod_ready.go:82] duration metric: took 10.663807ms for pod "etcd-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.227642  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.233476  593872 pod_ready.go:93] pod "kube-apiserver-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.233503  593872 pod_ready.go:82] duration metric: took 5.853067ms for pod "kube-apiserver-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.233518  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.239607  593872 pod_ready.go:93] pod "kube-controller-manager-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.239631  593872 pod_ready.go:82] duration metric: took 6.104882ms for pod "kube-controller-manager-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.239646  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c522g" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.245402  593872 pod_ready.go:93] pod "kube-proxy-c522g" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.245429  593872 pod_ready.go:82] duration metric: took 5.77497ms for pod "kube-proxy-c522g" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.245442  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.407590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:52.532029  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:52.614340  593872 pod_ready.go:93] pod "kube-scheduler-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.614364  593872 pod_ready.go:82] duration metric: took 368.914093ms for pod "kube-scheduler-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.614376  593872 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.628872  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:52.630785  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:52.907684  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:53.032348  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:53.123194  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:53.125921  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:53.407116  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:53.531905  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:53.632352  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:53.633355  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:53.907223  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:54.031444  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:54.123369  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:54.125452  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:54.406161  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:54.531797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:54.621522  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:54.625378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:54.626368  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:54.908405  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:55.033311  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:55.129235  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:55.136020  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:55.407641  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:55.532754  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:55.629988  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:55.630688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:55.908033  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:56.032956  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:56.131881  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:56.135239  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:56.407252  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:56.532357  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:56.634496  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:56.634762  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:56.675444  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:56.907632  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:57.032216  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:57.124438  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:57.129823  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:57.407619  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:57.531487  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:57.629839  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:57.629800  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:57.908525  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:58.032421  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:58.127448  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:58.128931  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:58.406845  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:58.532378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:58.629947  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:58.637452  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:58.907318  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:59.038238  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:59.123250  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:59.123692  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:59.124754  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:59.407399  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:59.531831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:59.625264  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:59.627267  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:59.907619  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:00.040374  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:00.143816  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:00.162956  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:00.414806  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:00.535472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:00.638003  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:00.654461  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:00.906727  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:01.033760  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:01.122760  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:01.127364  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:01.407190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:01.531912  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:01.620649  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:01.623645  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:01.625824  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:01.907353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:02.031678  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:02.127559  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:02.136258  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:02.407697  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:02.532649  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:02.624581  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:02.626864  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:02.907243  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:03.031855  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:03.124370  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:03.125997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:03.406572  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:03.531774  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:03.622198  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:03.623803  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:03.626096  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:03.906862  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:04.034339  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:04.123394  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:04.125057  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:04.406456  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:04.531188  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:04.624236  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:04.625251  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:04.907437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:05.034015  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:05.136328  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:05.140535  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:05.407750  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:05.531688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:05.622928  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:05.625977  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:05.628216  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:05.907392  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:06.035534  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:06.137035  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:06.140689  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:06.407645  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:06.532556  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:06.631129  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:06.637720  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:06.907369  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:07.033152  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:07.131047  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:07.132304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:07.407831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:07.534227  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:07.628560  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:07.630035  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:07.908146  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:08.046766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:08.128098  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:08.146618  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:08.148909  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:08.406526  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:08.531145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:08.622943  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:08.625835  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:08.907156  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:09.032047  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:09.125239  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:09.127763  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:09.406510  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:09.535708  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:09.627079  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:09.628475  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:09.908539  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:10.032103  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:10.128287  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:10.130963  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:10.134006  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:10.408230  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:10.537067  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:10.635350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:10.636775  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:10.909280  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:11.031922  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:11.141473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:11.143400  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:11.409121  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:11.533393  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:11.624605  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:11.626270  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:11.908376  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:12.033643  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:12.134194  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:12.135720  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:12.139063  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:12.408488  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:12.533149  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:12.625412  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:12.628837  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:12.908197  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:13.039091  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:13.148688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:13.150626  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:13.407231  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:13.538129  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:13.633910  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:13.634284  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:13.907146  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:14.031963  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:14.138068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:14.139837  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:14.406746  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:14.532196  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:14.621320  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:14.623936  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:14.625755  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:14.915462  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:15.039044  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:15.151579  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:15.154892  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:15.407061  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:15.532461  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:15.623936  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:15.631335  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:15.907583  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:16.031676  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:16.132063  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:16.132159  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:16.407214  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:16.531877  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:16.622218  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:16.624731  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:16.627246  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:16.907112  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:17.031940  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:17.123879  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:17.125637  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:17.407684  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:17.531652  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:17.623237  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:17.624952  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:17.907148  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:18.032472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:18.124997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:18.128424  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:18.408608  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:18.533769  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:18.622613  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:18.625361  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:18.626956  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:18.907688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:19.032365  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:19.126642  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:19.128047  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:19.407217  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:19.532128  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:19.625119  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:19.637814  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:19.908672  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:20.032425  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:20.134537  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:20.138948  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:20.407812  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:20.531792  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:20.623610  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:20.625524  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:20.907557  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:21.032364  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:21.122717  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:21.125484  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:21.128074  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:21.408400  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:21.532268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:21.626506  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:21.627950  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:21.907994  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:22.032143  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:22.125563  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:22.128530  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:22.407181  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:22.531457  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:22.627992  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:22.630782  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:22.908478  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:23.033385  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:23.150200  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:23.158628  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:23.175368  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:23.407473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:23.562996  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:23.633408  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:23.649343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:23.908349  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:24.051429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:24.130915  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:24.133182  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:24.407528  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:24.534113  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:24.624993  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:24.625264  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:24.906353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:25.031654  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:25.125434  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:25.125905  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:25.407969  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:25.532605  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:25.630968  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:25.632259  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:25.636454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:25.907689  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:26.036437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:26.130115  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:26.132311  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:26.407831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:26.532065  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:26.632353  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:26.635310  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:26.907144  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:27.031562  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:27.126473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:27.129258  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:27.407457  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:27.534341  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:27.628767  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:27.630758  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:27.906634  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:28.032860  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:28.133735  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:28.135066  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:28.139879  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:28.407295  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:28.530907  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:28.623822  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:28.625241  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:28.908410  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:29.032110  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:29.123334  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:29.125825  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:29.408334  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:29.531931  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:29.635718  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:29.637010  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:29.907747  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:30.032207  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:30.125423  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:30.129378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:30.429404  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:30.531075  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:30.623099  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:30.623506  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:30.626472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:30.907454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:31.031130  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:31.122882  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:31.125933  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:31.409131  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:31.536612  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:31.637346  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:31.637939  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:31.909737  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:32.033045  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:32.124243  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:32.125947  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:32.415304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:32.531436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:32.623785  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:32.628047  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:32.906594  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:33.032489  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:33.121437  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:33.123577  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:33.126223  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:33.407727  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:33.544797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:33.639503  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:33.639917  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:33.908501  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:34.042595  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:34.129639  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:34.143896  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:34.408132  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:34.532017  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:34.625507  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:34.625737  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:34.908204  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:35.031539  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:35.122869  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:35.125949  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:35.407190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:35.531204  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:35.621395  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:35.622954  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:35.627663  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:35.907161  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:36.031668  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:36.123711  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:36.125683  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:36.406719  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:36.531817  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:36.624197  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:36.625728  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:36.906856  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:37.039202  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:37.132125  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:37.139533  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:37.407783  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:37.532343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:37.627594  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:37.629986  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:37.635302  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:37.907649  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:38.036353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:38.129070  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:38.141087  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:38.406532  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:38.532632  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:38.629637  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:38.631244  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:38.907315  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:39.032138  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:39.144645  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:39.146049  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:39.410944  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:39.532413  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:39.626005  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:39.634918  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:39.907693  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:40.048331  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:40.135999  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:40.145681  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:40.147455  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:40.420292  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:40.532803  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:40.632950  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:40.633952  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:40.907773  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:41.031222  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:41.124458  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:41.126401  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:41.407738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:41.541065  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:41.627392  593872 kapi.go:107] duration metric: took 1m29.005936692s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:54:41.627849  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:41.907865  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:42.031882  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:42.128072  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:42.136230  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:42.408023  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:42.535450  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:42.628719  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:42.907633  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:43.037631  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:43.126577  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:43.408257  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:43.532830  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:43.622674  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:43.906383  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:44.032885  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:44.130566  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:44.412103  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:44.531674  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:44.624493  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:44.625354  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:44.907163  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:45.041059  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:45.146171  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:45.409090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:45.538109  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:45.625064  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:45.906825  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:46.032091  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:46.126749  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:46.408000  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:46.532548  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:46.625356  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:46.906911  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:47.032115  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:47.126363  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:47.126613  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:47.408116  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:47.537259  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:47.631788  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:47.906519  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:48.032902  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:48.124487  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:48.407049  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:48.531900  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:48.623826  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:48.907268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:49.032168  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:49.124770  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:49.407794  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:49.532588  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:49.621573  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:49.624122  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:49.907138  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:50.031140  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:50.125351  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:50.407077  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:50.531766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:50.623649  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:50.906983  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:51.031224  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:51.133790  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:51.407000  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:51.532440  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:51.621696  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:51.629042  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:51.910023  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:52.034638  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:52.131483  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:52.407512  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:52.531145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:52.624175  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:52.906583  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:53.031793  593872 kapi.go:107] duration metric: took 1m40.005442028s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:54:53.123528  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:53.407310  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:53.621853  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:53.624084  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:53.907521  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:54.125743  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:54.406985  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:54.624565  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:54.907009  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:55.123603  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:55.414564  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:55.633000  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:55.634997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:55.907186  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:56.129208  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:56.409356  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:56.626668  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:56.907820  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:57.127443  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:57.407400  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:57.633052  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:57.636316  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:57.907158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:58.124639  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:58.408153  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:58.628065  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:58.906454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:59.137305  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:59.409662  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:59.625505  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:59.908145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:00.234020  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:00.240334  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:00.412169  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:00.638990  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:00.907738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:01.137609  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:01.408120  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:01.625029  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:01.908497  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:02.123863  593872 kapi.go:107] duration metric: took 1m49.505042131s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:55:02.407797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:02.620522  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:02.908784  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:03.409791  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:03.906980  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:04.408196  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:04.628352  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:04.908939  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:05.407593  593872 kapi.go:107] duration metric: took 1m48.004337441s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:55:05.410378  593872 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-060912 cluster.
	I0920 18:55:05.412210  593872 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:55:05.414346  593872 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:55:05.416700  593872 out.go:177] * Enabled addons: inspektor-gadget, cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 18:55:05.419147  593872 addons.go:510] duration metric: took 1m58.760444537s for enable addons: enabled=[inspektor-gadget cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 18:55:07.120857  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:09.122235  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:11.123435  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:13.620855  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:15.621324  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:18.121934  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:20.620894  593872 pod_ready.go:93] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"True"
	I0920 18:55:20.620923  593872 pod_ready.go:82] duration metric: took 1m28.006539781s for pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.620936  593872 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.626791  593872 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace has status "Ready":"True"
	I0920 18:55:20.626827  593872 pod_ready.go:82] duration metric: took 5.883525ms for pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.626855  593872 pod_ready.go:39] duration metric: took 1m30.447894207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:55:20.626873  593872 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:55:20.626917  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:20.627002  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:20.683602  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:20.683673  593872 cri.go:89] found id: ""
	I0920 18:55:20.683688  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:20.683760  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.687980  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:20.688058  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:20.725151  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:20.725197  593872 cri.go:89] found id: ""
	I0920 18:55:20.725206  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:20.725263  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.728863  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:20.728936  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:20.768741  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:20.768764  593872 cri.go:89] found id: ""
	I0920 18:55:20.768772  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:20.768830  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.773058  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:20.773130  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:20.811084  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:20.811108  593872 cri.go:89] found id: ""
	I0920 18:55:20.811117  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:20.811173  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.814706  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:20.814779  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:20.856300  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:20.856326  593872 cri.go:89] found id: ""
	I0920 18:55:20.856334  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:20.856389  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.860484  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:20.860560  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:20.902306  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:20.902329  593872 cri.go:89] found id: ""
	I0920 18:55:20.902347  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:20.902405  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.905966  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:20.906048  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:20.949793  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:20.949815  593872 cri.go:89] found id: ""
	I0920 18:55:20.949823  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:20.949881  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.953468  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:20.953498  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:20.971004  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:20.971114  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:21.056388  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:21.056425  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:21.104981  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:21.105015  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:21.151277  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:21.151308  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:21.229700  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:21.229738  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:21.276985  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:21.277013  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:21.366118  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:21.366161  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:21.585779  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:21.585813  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:21.630226  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:21.630253  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:21.675630  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:21.675658  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:21.774311  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:21.774353  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:24.342050  593872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:55:24.356377  593872 api_server.go:72] duration metric: took 2m17.697948817s to wait for apiserver process to appear ...
	I0920 18:55:24.356407  593872 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:55:24.356442  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:24.356512  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:24.396349  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:24.396374  593872 cri.go:89] found id: ""
	I0920 18:55:24.396383  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:24.396440  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.400025  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:24.400103  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:24.437632  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:24.437656  593872 cri.go:89] found id: ""
	I0920 18:55:24.437665  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:24.437765  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.441226  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:24.441310  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:24.480492  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:24.480515  593872 cri.go:89] found id: ""
	I0920 18:55:24.480523  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:24.480588  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.484432  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:24.484514  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:24.534785  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:24.534810  593872 cri.go:89] found id: ""
	I0920 18:55:24.534819  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:24.534880  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.538697  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:24.538963  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:24.588756  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:24.588780  593872 cri.go:89] found id: ""
	I0920 18:55:24.588789  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:24.588877  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.592738  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:24.592830  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:24.634956  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:24.634979  593872 cri.go:89] found id: ""
	I0920 18:55:24.634987  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:24.635066  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.638509  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:24.638580  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:24.682689  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:24.682712  593872 cri.go:89] found id: ""
	I0920 18:55:24.682720  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:24.682778  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.686419  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:24.686490  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:24.769481  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:24.769516  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:24.824413  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:24.824464  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:24.873507  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:24.873540  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:24.928565  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:24.928603  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:24.972207  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:24.972240  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:25.034067  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:25.034101  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:25.088479  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:25.088515  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:25.180642  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:25.180679  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:25.197983  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:25.198018  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:25.348415  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:25.348488  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:25.396676  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:25.396702  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:27.999369  593872 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 18:55:28.011064  593872 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 18:55:28.012493  593872 api_server.go:141] control plane version: v1.31.1
	I0920 18:55:28.012529  593872 api_server.go:131] duration metric: took 3.656113679s to wait for apiserver health ...
	I0920 18:55:28.012540  593872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:55:28.012573  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:28.012671  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:28.054623  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:28.054647  593872 cri.go:89] found id: ""
	I0920 18:55:28.054656  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:28.054716  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.058765  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:28.058859  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:28.103813  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:28.103835  593872 cri.go:89] found id: ""
	I0920 18:55:28.103843  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:28.103902  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.107830  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:28.107903  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:28.156157  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:28.156183  593872 cri.go:89] found id: ""
	I0920 18:55:28.156191  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:28.156248  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.160447  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:28.160566  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:28.201058  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:28.201081  593872 cri.go:89] found id: ""
	I0920 18:55:28.201089  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:28.201166  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.204832  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:28.204932  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:28.243472  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:28.243506  593872 cri.go:89] found id: ""
	I0920 18:55:28.243516  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:28.243582  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.247662  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:28.247823  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:28.294254  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:28.294288  593872 cri.go:89] found id: ""
	I0920 18:55:28.294297  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:28.294369  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.297872  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:28.297956  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:28.336421  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:28.336456  593872 cri.go:89] found id: ""
	I0920 18:55:28.336465  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:28.336532  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.340282  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:28.340356  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:28.412211  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:28.412251  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:28.460209  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:28.460238  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:28.511508  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:28.511544  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:28.604612  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:28.604650  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:28.654841  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:28.654872  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:28.695824  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:28.695854  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:28.738546  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:28.738579  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:28.778897  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:28.778928  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:28.872309  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:28.872347  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:28.889387  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:28.889419  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:29.037307  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:29.037336  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:31.614602  593872 system_pods.go:59] 18 kube-system pods found
	I0920 18:55:31.614646  593872 system_pods.go:61] "coredns-7c65d6cfc9-cl27s" [04689caf-fd31-41a8-b632-da305d969b77] Running
	I0920 18:55:31.614653  593872 system_pods.go:61] "csi-hostpath-attacher-0" [688a011d-4561-4c00-844b-6aa7f297a0aa] Running
	I0920 18:55:31.614658  593872 system_pods.go:61] "csi-hostpath-resizer-0" [106e8af5-f95f-436e-9fab-304f7ea18617] Running
	I0920 18:55:31.614663  593872 system_pods.go:61] "csi-hostpathplugin-7jhqn" [6803e01f-d3a5-4fe1-b76c-a936b8eb8a69] Running
	I0920 18:55:31.614667  593872 system_pods.go:61] "etcd-addons-060912" [f2728dff-aab5-4b32-bf02-93f8d2b5a6c1] Running
	I0920 18:55:31.614671  593872 system_pods.go:61] "kindnet-tl865" [9c700cfd-066f-47c6-aade-257d64dd87fd] Running
	I0920 18:55:31.614675  593872 system_pods.go:61] "kube-apiserver-addons-060912" [af9cd9b5-fbf4-4bb2-b6b8-58e119cc2e54] Running
	I0920 18:55:31.614679  593872 system_pods.go:61] "kube-controller-manager-addons-060912" [e2b17a09-a56a-42f3-885f-853c02ecc200] Running
	I0920 18:55:31.614683  593872 system_pods.go:61] "kube-ingress-dns-minikube" [1b76bbee-eac5-4d2e-b598-514d3650c987] Running
	I0920 18:55:31.614687  593872 system_pods.go:61] "kube-proxy-c522g" [3a56e42d-23c2-4774-b82c-3c6b2daa3a1f] Running
	I0920 18:55:31.614691  593872 system_pods.go:61] "kube-scheduler-addons-060912" [a6533c75-ea94-4da5-bb5e-7a23d9d92d69] Running
	I0920 18:55:31.614697  593872 system_pods.go:61] "metrics-server-84c5f94fbc-6n52n" [707188cc-7e99-491b-b510-82f0f9320fee] Running
	I0920 18:55:31.614703  593872 system_pods.go:61] "nvidia-device-plugin-daemonset-6c4pc" [70208489-2144-41c7-b72c-895d0344ccd9] Running
	I0920 18:55:31.614706  593872 system_pods.go:61] "registry-66c9cd494c-w8gt6" [ded46fe6-d8da-4546-81fd-d1f1949dcadb] Running
	I0920 18:55:31.614710  593872 system_pods.go:61] "registry-proxy-8ghgp" [5a98470b-31f7-4f1c-9586-f681f375453b] Running
	I0920 18:55:31.614714  593872 system_pods.go:61] "snapshot-controller-56fcc65765-r8g9v" [b22e42d4-0119-4486-b078-a8a3532a14c2] Running
	I0920 18:55:31.614717  593872 system_pods.go:61] "snapshot-controller-56fcc65765-wp8r8" [0aa17fbb-ebc2-41dc-8a5a-de69a6f62b73] Running
	I0920 18:55:31.614725  593872 system_pods.go:61] "storage-provisioner" [76adfe52-d569-4e95-82f8-414bc1dcbc24] Running
	I0920 18:55:31.614731  593872 system_pods.go:74] duration metric: took 3.602185872s to wait for pod list to return data ...
	I0920 18:55:31.614744  593872 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:55:31.617429  593872 default_sa.go:45] found service account: "default"
	I0920 18:55:31.617456  593872 default_sa.go:55] duration metric: took 2.706624ms for default service account to be created ...
	I0920 18:55:31.617465  593872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:55:31.627751  593872 system_pods.go:86] 18 kube-system pods found
	I0920 18:55:31.627789  593872 system_pods.go:89] "coredns-7c65d6cfc9-cl27s" [04689caf-fd31-41a8-b632-da305d969b77] Running
	I0920 18:55:31.627797  593872 system_pods.go:89] "csi-hostpath-attacher-0" [688a011d-4561-4c00-844b-6aa7f297a0aa] Running
	I0920 18:55:31.627803  593872 system_pods.go:89] "csi-hostpath-resizer-0" [106e8af5-f95f-436e-9fab-304f7ea18617] Running
	I0920 18:55:31.627808  593872 system_pods.go:89] "csi-hostpathplugin-7jhqn" [6803e01f-d3a5-4fe1-b76c-a936b8eb8a69] Running
	I0920 18:55:31.627813  593872 system_pods.go:89] "etcd-addons-060912" [f2728dff-aab5-4b32-bf02-93f8d2b5a6c1] Running
	I0920 18:55:31.627817  593872 system_pods.go:89] "kindnet-tl865" [9c700cfd-066f-47c6-aade-257d64dd87fd] Running
	I0920 18:55:31.627821  593872 system_pods.go:89] "kube-apiserver-addons-060912" [af9cd9b5-fbf4-4bb2-b6b8-58e119cc2e54] Running
	I0920 18:55:31.627826  593872 system_pods.go:89] "kube-controller-manager-addons-060912" [e2b17a09-a56a-42f3-885f-853c02ecc200] Running
	I0920 18:55:31.627831  593872 system_pods.go:89] "kube-ingress-dns-minikube" [1b76bbee-eac5-4d2e-b598-514d3650c987] Running
	I0920 18:55:31.627836  593872 system_pods.go:89] "kube-proxy-c522g" [3a56e42d-23c2-4774-b82c-3c6b2daa3a1f] Running
	I0920 18:55:31.627840  593872 system_pods.go:89] "kube-scheduler-addons-060912" [a6533c75-ea94-4da5-bb5e-7a23d9d92d69] Running
	I0920 18:55:31.627844  593872 system_pods.go:89] "metrics-server-84c5f94fbc-6n52n" [707188cc-7e99-491b-b510-82f0f9320fee] Running
	I0920 18:55:31.627863  593872 system_pods.go:89] "nvidia-device-plugin-daemonset-6c4pc" [70208489-2144-41c7-b72c-895d0344ccd9] Running
	I0920 18:55:31.627867  593872 system_pods.go:89] "registry-66c9cd494c-w8gt6" [ded46fe6-d8da-4546-81fd-d1f1949dcadb] Running
	I0920 18:55:31.627873  593872 system_pods.go:89] "registry-proxy-8ghgp" [5a98470b-31f7-4f1c-9586-f681f375453b] Running
	I0920 18:55:31.627879  593872 system_pods.go:89] "snapshot-controller-56fcc65765-r8g9v" [b22e42d4-0119-4486-b078-a8a3532a14c2] Running
	I0920 18:55:31.627884  593872 system_pods.go:89] "snapshot-controller-56fcc65765-wp8r8" [0aa17fbb-ebc2-41dc-8a5a-de69a6f62b73] Running
	I0920 18:55:31.627888  593872 system_pods.go:89] "storage-provisioner" [76adfe52-d569-4e95-82f8-414bc1dcbc24] Running
	I0920 18:55:31.627898  593872 system_pods.go:126] duration metric: took 10.426903ms to wait for k8s-apps to be running ...
	I0920 18:55:31.627918  593872 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:55:31.627995  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:55:31.639886  593872 system_svc.go:56] duration metric: took 11.957384ms WaitForService to wait for kubelet
	I0920 18:55:31.639916  593872 kubeadm.go:582] duration metric: took 2m24.981492962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:31.639936  593872 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:55:31.643318  593872 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 18:55:31.643354  593872 node_conditions.go:123] node cpu capacity is 2
	I0920 18:55:31.643367  593872 node_conditions.go:105] duration metric: took 3.425286ms to run NodePressure ...
	I0920 18:55:31.643399  593872 start.go:241] waiting for startup goroutines ...
	I0920 18:55:31.643414  593872 start.go:246] waiting for cluster config update ...
	I0920 18:55:31.643431  593872 start.go:255] writing updated cluster config ...
	I0920 18:55:31.643750  593872 ssh_runner.go:195] Run: rm -f paused
	I0920 18:55:31.999069  593872 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:55:32.001537  593872 out.go:177] * Done! kubectl is now configured to use "addons-060912" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 19:07:27 addons-060912 crio[963]: time="2024-09-20 19:07:27.557710469Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 20 19:07:27 addons-060912 crio[963]: time="2024-09-20 19:07:27.580837405Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c631290a728bcb45eddd5e36e995ce7f8f25738c42aa06514fdd7d52b8e29aaf/merged/etc/passwd: no such file or directory"
	Sep 20 19:07:27 addons-060912 crio[963]: time="2024-09-20 19:07:27.580882402Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c631290a728bcb45eddd5e36e995ce7f8f25738c42aa06514fdd7d52b8e29aaf/merged/etc/group: no such file or directory"
	Sep 20 19:07:27 addons-060912 crio[963]: time="2024-09-20 19:07:27.623774954Z" level=info msg="Created container 167a4fd5fe5fecdf605549e28bec2fc2e9540a0889dd06c290e9fadaa8eeb52e: default/hello-world-app-55bf9c44b4-92l5t/hello-world-app" id=0dbd885c-f33f-4392-b0cb-10848f0ce147 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 20 19:07:27 addons-060912 crio[963]: time="2024-09-20 19:07:27.624575977Z" level=info msg="Starting container: 167a4fd5fe5fecdf605549e28bec2fc2e9540a0889dd06c290e9fadaa8eeb52e" id=7c43a09e-e3fd-493c-bcc2-8ac653fd943c name=/runtime.v1.RuntimeService/StartContainer
	Sep 20 19:07:27 addons-060912 crio[963]: time="2024-09-20 19:07:27.633587317Z" level=info msg="Started container" PID=8349 containerID=167a4fd5fe5fecdf605549e28bec2fc2e9540a0889dd06c290e9fadaa8eeb52e description=default/hello-world-app-55bf9c44b4-92l5t/hello-world-app id=7c43a09e-e3fd-493c-bcc2-8ac653fd943c name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7bcf6d431e239ec0e1d403243f9a1ec4227954a9f294f9c06fb800c2624ca45
	Sep 20 19:07:28 addons-060912 crio[963]: time="2024-09-20 19:07:28.086510951Z" level=info msg="Removing container: 36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57" id=a844527d-c508-4a36-9d32-b5a3ef7cae00 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:07:28 addons-060912 crio[963]: time="2024-09-20 19:07:28.102957315Z" level=info msg="Removed container 36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=a844527d-c508-4a36-9d32-b5a3ef7cae00 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:07:29 addons-060912 crio[963]: time="2024-09-20 19:07:29.828727467Z" level=info msg="Stopping container: 85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6 (timeout: 2s)" id=a1092dfd-030d-4193-a328-c816db7ff9ed name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:07:30 addons-060912 crio[963]: time="2024-09-20 19:07:30.400451099Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43778edb-35be-48b8-a837-0184c5ee33bd name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:07:30 addons-060912 crio[963]: time="2024-09-20 19:07:30.400678938Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=43778edb-35be-48b8-a837-0184c5ee33bd name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.836253647Z" level=warning msg="Stopping container 85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=a1092dfd-030d-4193-a328-c816db7ff9ed name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:07:31 addons-060912 conmon[5012]: conmon 85695c2824bbf86e4a82 <ninfo>: container 5024 exited with status 137
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.981186718Z" level=info msg="Stopped container 85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6: ingress-nginx/ingress-nginx-controller-bc57996ff-xg7x4/controller" id=a1092dfd-030d-4193-a328-c816db7ff9ed name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.981730749Z" level=info msg="Stopping pod sandbox: 70f947431e6fa7d83bb8e364103601a55e16c67c70afc664d53f81453088ed14" id=33f0f470-06cd-48e9-8b48-06a78d198f1d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.986137006Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-K453HVBH6RYHPZP7 - [0:0]\n:KUBE-HP-3H6EY67RHFIH2R7J - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-3H6EY67RHFIH2R7J\n-X KUBE-HP-K453HVBH6RYHPZP7\nCOMMIT\n"
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.988056262Z" level=info msg="Closing host port tcp:80"
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.988109480Z" level=info msg="Closing host port tcp:443"
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.989460270Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.989489013Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.989694969Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-xg7x4 Namespace:ingress-nginx ID:70f947431e6fa7d83bb8e364103601a55e16c67c70afc664d53f81453088ed14 UID:5991ce09-b48a-4443-b4d7-483c6ff98c74 NetNS:/var/run/netns/4b15d4d5-3528-40b7-b059-9e1c2568c715 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 19:07:31 addons-060912 crio[963]: time="2024-09-20 19:07:31.989835301Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-xg7x4 from CNI network \"kindnet\" (type=ptp)"
	Sep 20 19:07:32 addons-060912 crio[963]: time="2024-09-20 19:07:32.017118099Z" level=info msg="Stopped pod sandbox: 70f947431e6fa7d83bb8e364103601a55e16c67c70afc664d53f81453088ed14" id=33f0f470-06cd-48e9-8b48-06a78d198f1d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:07:32 addons-060912 crio[963]: time="2024-09-20 19:07:32.097863652Z" level=info msg="Removing container: 85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6" id=8848fd34-f05e-4d80-8ac1-689772b8b2a3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 19:07:32 addons-060912 crio[963]: time="2024-09-20 19:07:32.112723722Z" level=info msg="Removed container 85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6: ingress-nginx/ingress-nginx-controller-bc57996ff-xg7x4/controller" id=8848fd34-f05e-4d80-8ac1-689772b8b2a3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	167a4fd5fe5fe       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app            0                   f7bcf6d431e23       hello-world-app-55bf9c44b4-92l5t
	bb932ecde10ba       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                              2 minutes ago       Running             nginx                      0                   d3a5d7aa5e6d5       nginx
	4a43484742705       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 12 minutes ago      Running             gcp-auth                   0                   41279eea3be85       gcp-auth-89d5ffd79-lnzdp
	82b19cfe4aa53       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             12 minutes ago      Exited              patch                      3                   23fd631d639d0       ingress-nginx-admission-patch-fdtb4
	61ffa12ea9f4d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago      Exited              create                     0                   5d21c325f34e3       ingress-nginx-admission-create-c2ktk
	ec4a2ebde1d92       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              13 minutes ago      Running             yakd                       0                   307ea4d8792ab       yakd-dashboard-67d98fc6b-v89pf
	6c859b6d092c6       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             13 minutes ago      Running             local-path-provisioner     0                   f2596fcb2f979       local-path-provisioner-86d989889c-4phmr
	26abc82a1efc9       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        13 minutes ago      Running             metrics-server             0                   d47ebd6d1ffd2       metrics-server-84c5f94fbc-6n52n
	858722a918b70       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d               13 minutes ago      Running             cloud-spanner-emulator     0                   9b0c7737cb9ee       cloud-spanner-emulator-5b584cc74-77rvl
	b892d5aeaafb2       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     13 minutes ago      Running             nvidia-device-plugin-ctr   0                   cf2072a44be11       nvidia-device-plugin-daemonset-6c4pc
	b6bb91f96aedc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             13 minutes ago      Running             storage-provisioner        0                   8bd9bba6c8fc6       storage-provisioner
	1a880bc579bf0       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             13 minutes ago      Running             coredns                    0                   5b3730f2d41b7       coredns-7c65d6cfc9-cl27s
	b8685b3b7a398       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             14 minutes ago      Running             kindnet-cni                0                   a3e64840ab606       kindnet-tl865
	6b08aa03c509c       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             14 minutes ago      Running             kube-proxy                 0                   16ec6dded1779       kube-proxy-c522g
	4ecd6cb0f6955       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             14 minutes ago      Running             kube-controller-manager    0                   cf3a116aeab5b       kube-controller-manager-addons-060912
	0f324b0fef4f9       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             14 minutes ago      Running             kube-scheduler             0                   3d36f26aa452e       kube-scheduler-addons-060912
	8bee65ae4a888       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             14 minutes ago      Running             kube-apiserver             0                   33b4572492492       kube-apiserver-addons-060912
	ea2efa9e4710b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             14 minutes ago      Running             etcd                       0                   f7b5fa9394991       etcd-addons-060912
	
	
	==> coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] <==
	[INFO] 10.244.0.18:55683 - 36665 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078695s
	[INFO] 10.244.0.18:40033 - 64165 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002757275s
	[INFO] 10.244.0.18:40033 - 10146 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002422465s
	[INFO] 10.244.0.18:44258 - 36529 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00067368s
	[INFO] 10.244.0.18:44258 - 3251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000648646s
	[INFO] 10.244.0.18:48701 - 30933 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114313s
	[INFO] 10.244.0.18:48701 - 3798 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177706s
	[INFO] 10.244.0.18:43291 - 11795 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065912s
	[INFO] 10.244.0.18:43291 - 45806 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060553s
	[INFO] 10.244.0.18:54945 - 47277 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055015s
	[INFO] 10.244.0.18:54945 - 42927 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080122s
	[INFO] 10.244.0.18:54866 - 8361 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001695403s
	[INFO] 10.244.0.18:54866 - 41643 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001756359s
	[INFO] 10.244.0.18:33956 - 27160 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067043s
	[INFO] 10.244.0.18:33956 - 20762 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000052537s
	[INFO] 10.244.0.20:52499 - 34827 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000205341s
	[INFO] 10.244.0.20:36942 - 16052 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363782s
	[INFO] 10.244.0.20:52995 - 29444 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000351162s
	[INFO] 10.244.0.20:44078 - 60085 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000383892s
	[INFO] 10.244.0.20:54831 - 11107 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000215918s
	[INFO] 10.244.0.20:42723 - 50453 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000198564s
	[INFO] 10.244.0.20:33980 - 22876 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003708615s
	[INFO] 10.244.0.20:36030 - 39141 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003676779s
	[INFO] 10.244.0.20:46057 - 16877 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005906983s
	[INFO] 10.244.0.20:59156 - 51441 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.005414694s
	
	
	==> describe nodes <==
	Name:               addons-060912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-060912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=addons-060912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_53_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-060912
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:52:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-060912
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:07:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:05:35 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:05:35 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:05:35 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:05:35 +0000   Fri, 20 Sep 2024 18:53:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-060912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 10dc14ff36a34258b0be727d4ac3c9e0
	  System UUID:                f67c7638-9fc9-4a4c-946b-9e8a422e1126
	  Boot ID:                    b363b069-6c72-47b0-a80b-36cf6b75e261
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     cloud-spanner-emulator-5b584cc74-77rvl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-92l5t           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-89d5ffd79-lnzdp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-cl27s                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-060912                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-tl865                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-060912               250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-060912      200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-c522g                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-060912               100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-6n52n            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 nvidia-device-plugin-daemonset-6c4pc       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-4phmr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-v89pf             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node addons-060912 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node addons-060912 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node addons-060912 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node addons-060912 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node addons-060912 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node addons-060912 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node addons-060912 event: Registered Node addons-060912 in Controller
	  Normal   NodeReady                13m                kubelet          Node addons-060912 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] <==
	{"level":"info","ts":"2024-09-20T18:52:55.959369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T18:52:55.959404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T18:52:55.963242Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-060912 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T18:52:55.963430Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:52:55.963737Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.967032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:52:55.967327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:52:55.967356Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:52:55.967795Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:55.967940Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:55.968794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:52:55.971183Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.975635Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.975703Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.979680Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T18:53:07.939493Z","caller":"traceutil/trace.go:171","msg":"trace[436272735] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"123.951195ms","start":"2024-09-20T18:53:07.815524Z","end":"2024-09-20T18:53:07.939475Z","steps":["trace[436272735] 'process raft request'  (duration: 87.482157ms)","trace[436272735] 'compare'  (duration: 36.052588ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:53:08.037242Z","caller":"traceutil/trace.go:171","msg":"trace[849823426] linearizableReadLoop","detail":"{readStateIndex:392; appliedIndex:391; }","duration":"221.629804ms","start":"2024-09-20T18:53:07.815591Z","end":"2024-09-20T18:53:08.037220Z","steps":["trace[849823426] 'read index received'  (duration: 447.974µs)","trace[849823426] 'applied index is now lower than readState.Index'  (duration: 221.179918ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:53:08.037369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.740244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-09-20T18:53:08.159742Z","caller":"traceutil/trace.go:171","msg":"trace[402661395] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:385; }","duration":"282.386041ms","start":"2024-09-20T18:53:07.815587Z","end":"2024-09-20T18:53:08.097973Z","steps":["trace[402661395] 'agreement among raft nodes before linearized reading'  (duration: 221.690792ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:53:08.159844Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:53:07.815565Z","time spent":"344.255539ms","remote":"127.0.0.1:37374","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":636,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-09-20T18:53:09.679754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.759103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-20T18:53:09.680079Z","caller":"traceutil/trace.go:171","msg":"trace[154019258] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:389; }","duration":"118.292131ms","start":"2024-09-20T18:53:09.561774Z","end":"2024-09-20T18:53:09.680066Z","steps":["trace[154019258] 'range keys from in-memory index tree'  (duration: 117.688178ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:02:56.467684Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1571}
	{"level":"info","ts":"2024-09-20T19:02:56.500782Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1571,"took":"32.660846ms","hash":1649838481,"current-db-size-bytes":6402048,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3543040,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-20T19:02:56.500833Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1649838481,"revision":1571,"compact-revision":-1}
	
	
	==> gcp-auth [4a43484742705aed20cd218f80a63f0e4090a96ee4ee0cef03af1f076f0bfd2b] <==
	2024/09/20 18:55:04 GCP Auth Webhook started!
	2024/09/20 18:55:32 Ready to marshal response ...
	2024/09/20 18:55:32 Ready to write response ...
	2024/09/20 18:55:32 Ready to marshal response ...
	2024/09/20 18:55:32 Ready to write response ...
	2024/09/20 18:55:32 Ready to marshal response ...
	2024/09/20 18:55:32 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:47 Ready to marshal response ...
	2024/09/20 19:03:47 Ready to write response ...
	2024/09/20 19:04:03 Ready to marshal response ...
	2024/09/20 19:04:03 Ready to write response ...
	2024/09/20 19:04:37 Ready to marshal response ...
	2024/09/20 19:04:37 Ready to write response ...
	2024/09/20 19:05:07 Ready to marshal response ...
	2024/09/20 19:05:07 Ready to write response ...
	2024/09/20 19:07:26 Ready to marshal response ...
	2024/09/20 19:07:26 Ready to write response ...
	
	
	==> kernel <==
	 19:07:37 up  2:50,  0 users,  load average: 0.23, 0.50, 1.33
	Linux addons-060912 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] <==
	I0920 19:05:29.469678       1 main.go:299] handling current node
	I0920 19:05:39.469631       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:05:39.469667       1 main.go:299] handling current node
	I0920 19:05:49.476638       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:05:49.476681       1 main.go:299] handling current node
	I0920 19:05:59.471226       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:05:59.471259       1 main.go:299] handling current node
	I0920 19:06:09.470614       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:06:09.470648       1 main.go:299] handling current node
	I0920 19:06:19.476568       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:06:19.476606       1 main.go:299] handling current node
	I0920 19:06:29.469625       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:06:29.469657       1 main.go:299] handling current node
	I0920 19:06:39.469613       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:06:39.469648       1 main.go:299] handling current node
	I0920 19:06:49.469607       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:06:49.469643       1 main.go:299] handling current node
	I0920 19:06:59.475109       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:06:59.475147       1 main.go:299] handling current node
	I0920 19:07:09.470651       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:07:09.470687       1 main.go:299] handling current node
	I0920 19:07:19.469625       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:07:19.469662       1 main.go:299] handling current node
	I0920 19:07:29.469640       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:07:29.469764       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 18:55:20.383384       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	E0920 18:55:20.385944       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	E0920 18:55:20.391750       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	I0920 18:55:20.475621       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 19:03:36.739548       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.251.149"}
	I0920 19:04:15.141850       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 19:04:54.811710       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:04:54.811767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:04:54.840211       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:04:54.840257       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:04:54.872229       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:04:54.872286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:04:55.122560       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:04:55.122691       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 19:04:55.850688       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 19:04:56.123803       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 19:04:56.145936       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 19:05:01.083961       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 19:05:02.112962       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 19:05:06.770356       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 19:05:07.101143       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.191.188"}
	I0920 19:07:26.475827       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.94.229"}
	E0920 19:07:28.875842       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] <==
	W0920 19:06:06.450627       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:06:06.450671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:06:07.148539       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:06:07.148583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:06:17.515612       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:06:17.515656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:06:21.375585       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:06:21.375628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:06:53.584915       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:06:53.584959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:06:55.874116       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:06:55.874159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:07:12.622375       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:07:12.622421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:07:15.015068       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:07:15.015217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:07:26.252171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="47.710254ms"
	I0920 19:07:26.283327       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.033384ms"
	I0920 19:07:26.284170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="48.616µs"
	I0920 19:07:26.291001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.666µs"
	I0920 19:07:28.126466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.89183ms"
	I0920 19:07:28.126622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.376µs"
	I0920 19:07:28.797961       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0920 19:07:28.803620       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0920 19:07:28.807556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.712µs"
	
	
	==> kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] <==
	I0920 18:53:11.974563       1 server_linux.go:66] "Using iptables proxy"
	I0920 18:53:12.292405       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 18:53:12.292610       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:53:12.395134       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 18:53:12.395264       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:53:12.411910       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:53:12.466911       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:53:12.467076       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:12.468804       1 config.go:199] "Starting service config controller"
	I0920 18:53:12.468933       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:53:12.469006       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:53:12.469013       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:53:12.469600       1 config.go:328] "Starting node config controller"
	I0920 18:53:12.469649       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:53:12.569297       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:53:12.579772       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:53:12.619741       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] <==
	W0920 18:52:59.368286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.368811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.368955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:52:59.369008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.369112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:52:59.369165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.369263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0920 18:52:59.368369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.369732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.369322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.370408       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:52:59.371930       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:52:59.373052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 18:52:59.374756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.374808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0920 18:52:59.373291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0920 18:52:59.373528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0920 18:52:59.373573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0920 18:52:59.374022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.376432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.377169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:53:00.561937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:07:27 addons-060912 kubelet[1488]: I0920 19:07:27.660157    1488 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmmwq\" (UniqueName: \"kubernetes.io/projected/1b76bbee-eac5-4d2e-b598-514d3650c987-kube-api-access-zmmwq\") pod \"1b76bbee-eac5-4d2e-b598-514d3650c987\" (UID: \"1b76bbee-eac5-4d2e-b598-514d3650c987\") "
	Sep 20 19:07:27 addons-060912 kubelet[1488]: I0920 19:07:27.662028    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b76bbee-eac5-4d2e-b598-514d3650c987-kube-api-access-zmmwq" (OuterVolumeSpecName: "kube-api-access-zmmwq") pod "1b76bbee-eac5-4d2e-b598-514d3650c987" (UID: "1b76bbee-eac5-4d2e-b598-514d3650c987"). InnerVolumeSpecName "kube-api-access-zmmwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:07:27 addons-060912 kubelet[1488]: I0920 19:07:27.761466    1488 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zmmwq\" (UniqueName: \"kubernetes.io/projected/1b76bbee-eac5-4d2e-b598-514d3650c987-kube-api-access-zmmwq\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:07:28 addons-060912 kubelet[1488]: I0920 19:07:28.084749    1488 scope.go:117] "RemoveContainer" containerID="36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57"
	Sep 20 19:07:28 addons-060912 kubelet[1488]: I0920 19:07:28.103872    1488 scope.go:117] "RemoveContainer" containerID="36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57"
	Sep 20 19:07:28 addons-060912 kubelet[1488]: E0920 19:07:28.104457    1488 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57\": container with ID starting with 36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57 not found: ID does not exist" containerID="36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57"
	Sep 20 19:07:28 addons-060912 kubelet[1488]: I0920 19:07:28.104501    1488 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57"} err="failed to get container status \"36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57\": rpc error: code = NotFound desc = could not find container \"36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57\": container with ID starting with 36c7137a4230fdcf156003e3731f3130fd485f311315a7290c9f5b752c822b57 not found: ID does not exist"
	Sep 20 19:07:28 addons-060912 kubelet[1488]: I0920 19:07:28.106431    1488 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-92l5t" podStartSLOduration=1.170179296 podStartE2EDuration="2.106415775s" podCreationTimestamp="2024-09-20 19:07:26 +0000 UTC" firstStartedPulling="2024-09-20 19:07:26.618629288 +0000 UTC m=+865.316892516" lastFinishedPulling="2024-09-20 19:07:27.554865775 +0000 UTC m=+866.253128995" observedRunningTime="2024-09-20 19:07:28.10598353 +0000 UTC m=+866.804246750" watchObservedRunningTime="2024-09-20 19:07:28.106415775 +0000 UTC m=+866.804678995"
	Sep 20 19:07:29 addons-060912 kubelet[1488]: I0920 19:07:29.401660    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b76bbee-eac5-4d2e-b598-514d3650c987" path="/var/lib/kubelet/pods/1b76bbee-eac5-4d2e-b598-514d3650c987/volumes"
	Sep 20 19:07:29 addons-060912 kubelet[1488]: I0920 19:07:29.402050    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2914bb-bc0f-4ff4-84c1-5045b233e200" path="/var/lib/kubelet/pods/7a2914bb-bc0f-4ff4-84c1-5045b233e200/volumes"
	Sep 20 19:07:29 addons-060912 kubelet[1488]: I0920 19:07:29.402448    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9beae734-bca3-4118-9b8e-06013f76a591" path="/var/lib/kubelet/pods/9beae734-bca3-4118-9b8e-06013f76a591/volumes"
	Sep 20 19:07:30 addons-060912 kubelet[1488]: E0920 19:07:30.400945    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="06f0a745-85f9-4338-bb9f-bce49e7ec861"
	Sep 20 19:07:31 addons-060912 kubelet[1488]: E0920 19:07:31.728721    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859251728517800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543503,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:31 addons-060912 kubelet[1488]: E0920 19:07:31.728752    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859251728517800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543503,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.095641    1488 scope.go:117] "RemoveContainer" containerID="85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6"
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.113119    1488 scope.go:117] "RemoveContainer" containerID="85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6"
	Sep 20 19:07:32 addons-060912 kubelet[1488]: E0920 19:07:32.113549    1488 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6\": container with ID starting with 85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6 not found: ID does not exist" containerID="85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6"
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.113590    1488 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6"} err="failed to get container status \"85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6\": rpc error: code = NotFound desc = could not find container \"85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6\": container with ID starting with 85695c2824bbf86e4a8288a029345868270a36f764bc79694cef8bac756cceb6 not found: ID does not exist"
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.192156    1488 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr89q\" (UniqueName: \"kubernetes.io/projected/5991ce09-b48a-4443-b4d7-483c6ff98c74-kube-api-access-tr89q\") pod \"5991ce09-b48a-4443-b4d7-483c6ff98c74\" (UID: \"5991ce09-b48a-4443-b4d7-483c6ff98c74\") "
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.192214    1488 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5991ce09-b48a-4443-b4d7-483c6ff98c74-webhook-cert\") pod \"5991ce09-b48a-4443-b4d7-483c6ff98c74\" (UID: \"5991ce09-b48a-4443-b4d7-483c6ff98c74\") "
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.194415    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5991ce09-b48a-4443-b4d7-483c6ff98c74-kube-api-access-tr89q" (OuterVolumeSpecName: "kube-api-access-tr89q") pod "5991ce09-b48a-4443-b4d7-483c6ff98c74" (UID: "5991ce09-b48a-4443-b4d7-483c6ff98c74"). InnerVolumeSpecName "kube-api-access-tr89q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.195090    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5991ce09-b48a-4443-b4d7-483c6ff98c74-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5991ce09-b48a-4443-b4d7-483c6ff98c74" (UID: "5991ce09-b48a-4443-b4d7-483c6ff98c74"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.293511    1488 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tr89q\" (UniqueName: \"kubernetes.io/projected/5991ce09-b48a-4443-b4d7-483c6ff98c74-kube-api-access-tr89q\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:07:32 addons-060912 kubelet[1488]: I0920 19:07:32.293551    1488 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5991ce09-b48a-4443-b4d7-483c6ff98c74-webhook-cert\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:07:33 addons-060912 kubelet[1488]: I0920 19:07:33.402386    1488 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5991ce09-b48a-4443-b4d7-483c6ff98c74" path="/var/lib/kubelet/pods/5991ce09-b48a-4443-b4d7-483c6ff98c74/volumes"
	
	
	==> storage-provisioner [b6bb91f96aedcf859be9e5aeb0d364423ca21915d0fb376bd36caefb6936c622] <==
	I0920 18:53:50.915207       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:53:50.945131       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:53:50.945260       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:53:50.953155       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:53:50.953416       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9!
	I0920 18:53:50.953624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e274d82f-245d-49e4-a33f-104ef4bee3c3", APIVersion:"v1", ResourceVersion:"947", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9 became leader
	I0920 18:53:51.053580       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-060912 -n addons-060912
helpers_test.go:261: (dbg) Run:  kubectl --context addons-060912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-060912 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-060912 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-060912/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 18:55:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hgwr8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hgwr8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/busybox to addons-060912
	  Normal   Pulling    10m (x4 over 12m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 12m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 12m)    kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 12m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    117s (x41 over 12m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.08s)
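The Ingress failure above ultimately traces back to the busybox image pull: the kubelet events in the post-mortem show `unable to retrieve auth token: invalid username/password: unauthorized: authentication failed` for `gcr.io/k8s-minikube/busybox:1.28.4-glibc`. A minimal, self-contained sketch of isolating that root-cause reason from the captured event line (this operates only on the log text quoted above, not against the live cluster):

```shell
# The Warning event message captured in the describe output above (verbatim).
event='Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed'
# Strip the longest prefix ending in ": " to keep only the final reason.
reason="${event##*: }"
echo "$reason"   # → authentication failed
```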

TestAddons/parallel/MetricsServer (350.48s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.434985ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-6n52n" [707188cc-7e99-491b-b510-82f0f9320fee] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00373449s
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (93.578988ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 11m50.318046661s

** /stderr **
I0920 19:04:56.321969  593105 retry.go:31] will retry after 3.986603128s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (269.738917ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 11m54.569026099s

** /stderr **
I0920 19:05:00.578783  593105 retry.go:31] will retry after 4.598657674s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (89.881251ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 11m59.26521696s

** /stderr **
I0920 19:05:05.268538  593105 retry.go:31] will retry after 4.576028026s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (91.640828ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 12m3.933351516s

** /stderr **
I0920 19:05:09.936577  593105 retry.go:31] will retry after 8.042749736s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (96.620819ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 12m12.07373708s

** /stderr **
I0920 19:05:18.077006  593105 retry.go:31] will retry after 11.183876489s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (85.635874ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 12m23.347449929s

** /stderr **
I0920 19:05:29.350728  593105 retry.go:31] will retry after 26.627727942s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (92.130248ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 12m50.067912557s

** /stderr **
I0920 19:05:56.071400  593105 retry.go:31] will retry after 32.994374041s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (89.036432ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 13m23.151532332s

** /stderr **
I0920 19:06:29.155190  593105 retry.go:31] will retry after 58.849364937s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (117.279631ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 14m22.118253273s

** /stderr **
I0920 19:07:28.123977  593105 retry.go:31] will retry after 38.949617444s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (87.108852ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 15m1.158158618s

** /stderr **
I0920 19:08:07.161361  593105 retry.go:31] will retry after 1m16.610640547s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (90.126263ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 16m17.859651063s

** /stderr **
I0920 19:09:23.862826  593105 retry.go:31] will retry after 1m14.369346425s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-060912 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-060912 top pods -n kube-system: exit status 1 (83.757948ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cl27s, age: 17m32.313981662s

** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
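The `retry.go:31` lines above show roughly doubling delays (3.9s, 4.6s, 8s, 11s, ... 1m16s) that flatten out toward the end, consistent with a capped exponential backoff. A rough sketch of that shape (illustrative base and cap, not minikube's actual `retry.go` parameters, which also add jitter):

```shell
# Capped exponential backoff: double the delay each attempt, up to a ceiling.
delay=4   # seconds (illustrative base, not the real minikube value)
cap=80    # seconds (illustrative ceiling)
for attempt in 1 2 3 4 5 6; do
  echo "attempt ${attempt}: will retry after ${delay}s"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
done
```

After the fifth attempt the computed delay exceeds the cap and is clamped, which is why the later retries in the transcript stop growing.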
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-060912
helpers_test.go:235: (dbg) docker inspect addons-060912:

-- stdout --
	[
	    {
	        "Id": "f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5",
	        "Created": "2024-09-20T18:52:39.740365125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 594367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T18:52:39.865408091Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/hosts",
	        "LogPath": "/var/lib/docker/containers/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5/f46765527c333a446521ba67e0f639dac32f9f39e75a8b3a5e27f9a9da46b5f5-json.log",
	        "Name": "/addons-060912",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-060912:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-060912",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99-init/diff:/var/lib/docker/overlay2/a92e9e9bba1980ffadfbad04ca227253691a545526e59e24c9fd42023a78d162/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68e9eff537701289758e11436d45f5a20dac5511c49bb17c6c279ea9a0f2ee99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-060912",
	                "Source": "/var/lib/docker/volumes/addons-060912/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-060912",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-060912",
	                "name.minikube.sigs.k8s.io": "addons-060912",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e9d76a1d4f78b17f57be343ce89cd0030fce0fd6b21bfc9013be4de1e162bf8",
	            "SandboxKey": "/var/run/docker/netns/5e9d76a1d4f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-060912": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "01fa9f6b959f74a22901f7d7f124f8f0aa8983b8fa8db0965f1c5571e7649814",
	                    "EndpointID": "a39b41b3ad3e63a6fe1c844d5ffbf7cf765e19876c05de1e6494d1a2189fa00b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-060912",
	                        "f46765527c33"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
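The inspect output above shows the minikube node container's published ports (22, 2376, 5000, 8443, 32443/tcp, each bound to a host port on 127.0.0.1). When debugging a failure like this, the host-port mapping can be pulled out of `docker inspect` JSON with a short script; this is a sketch assuming the `NetworkSettings.Ports` layout shown above, with a truncated inline sample standing in for the real output:

```python
import json

def host_ports(inspect_json: str) -> dict:
    """Map each container port (e.g. '22/tcp') to its published HostPort."""
    data = json.loads(inspect_json)
    # docker inspect returns a list with one object per inspected container
    ports = data[0]["NetworkSettings"]["Ports"]
    return {cport: bindings[0]["HostPort"]
            for cport, bindings in ports.items() if bindings}

# Minimal sample mirroring the inspect output above (truncated to two ports).
sample = json.dumps([{
    "NetworkSettings": {"Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32768"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}],
    }}
}])

print(host_ports(sample))  # {'22/tcp': '32768', '8443/tcp': '32771'}
```

Fed the full `docker inspect addons-060912` output instead of the sample, the same function would report all five bindings listed in the log.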
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-060912 -n addons-060912
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 logs -n 25: (1.5769274s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-447269                                                                     | download-only-447269   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | --download-only -p                                                                          | download-docker-266880 | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | download-docker-266880                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-266880                                                                   | download-docker-266880 | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-083327   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | binary-mirror-083327                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44087                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-083327                                                                     | binary-mirror-083327   | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| addons  | enable dashboard -p                                                                         | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | addons-060912                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | addons-060912                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-060912 --wait=true                                                                | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:55 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:03 UTC | 20 Sep 24 19:03 UTC |
	|         | -p addons-060912                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable                                                                | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:03 UTC | 20 Sep 24 19:03 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-060912 ip                                                                            | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	| addons  | addons-060912 addons                                                                        | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable                                                                | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-060912 addons                                                                        | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:04 UTC | 20 Sep 24 19:04 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:05 UTC | 20 Sep 24 19:05 UTC |
	|         | addons-060912                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-060912 ssh curl -s                                                                   | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-060912 ip                                                                            | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	| addons  | addons-060912 addons disable                                                                | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable                                                                | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | -p addons-060912                                                                            |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable                                                                | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:07 UTC | 20 Sep 24 19:07 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-060912 ssh cat                                                                       | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:08 UTC | 20 Sep 24 19:08 UTC |
	|         | /opt/local-path-provisioner/pvc-c835ab22-abe1-4560-b972-0a4131361751_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-060912 addons disable                                                                | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:08 UTC | 20 Sep 24 19:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:08 UTC | 20 Sep 24 19:08 UTC |
	|         | addons-060912                                                                               |                        |         |         |                     |                     |
	| addons  | addons-060912 addons                                                                        | addons-060912          | jenkins | v1.34.0 | 20 Sep 24 19:10 UTC | 20 Sep 24 19:10 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:52:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:52:15.407585  593872 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:52:15.407747  593872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:15.407757  593872 out.go:358] Setting ErrFile to fd 2...
	I0920 18:52:15.407763  593872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:15.408019  593872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 18:52:15.408464  593872 out.go:352] Setting JSON to false
	I0920 18:52:15.409334  593872 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9286,"bootTime":1726849050,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 18:52:15.409413  593872 start.go:139] virtualization:  
	I0920 18:52:15.412765  593872 out.go:177] * [addons-060912] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:52:15.415653  593872 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 18:52:15.415768  593872 notify.go:220] Checking for updates...
	I0920 18:52:15.421427  593872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:52:15.424323  593872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 18:52:15.427237  593872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	I0920 18:52:15.429911  593872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 18:52:15.432646  593872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:52:15.435403  593872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:52:15.470290  593872 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:52:15.470417  593872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:52:15.520925  593872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:52:15.51145031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:52:15.521041  593872 docker.go:318] overlay module found
	I0920 18:52:15.523900  593872 out.go:177] * Using the docker driver based on user configuration
	I0920 18:52:15.526500  593872 start.go:297] selected driver: docker
	I0920 18:52:15.526517  593872 start.go:901] validating driver "docker" against <nil>
	I0920 18:52:15.526531  593872 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:52:15.527216  593872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:52:15.581330  593872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:52:15.571863527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:52:15.581548  593872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:52:15.581786  593872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:52:15.584366  593872 out.go:177] * Using Docker driver with root privileges
	I0920 18:52:15.587045  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:52:15.587107  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:52:15.587121  593872 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:52:15.587223  593872 start.go:340] cluster config:
	{Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:15.590219  593872 out.go:177] * Starting "addons-060912" primary control-plane node in "addons-060912" cluster
	I0920 18:52:15.592826  593872 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:52:15.595652  593872 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:52:15.598342  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:15.598399  593872 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 18:52:15.598412  593872 cache.go:56] Caching tarball of preloaded images
	I0920 18:52:15.598446  593872 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:52:15.598514  593872 preload.go:172] Found /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0920 18:52:15.598525  593872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 18:52:15.598880  593872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json ...
	I0920 18:52:15.598952  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json: {Name:mk641e5e8bae111e7b0856105b10230ca65c9fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:15.614244  593872 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:52:15.614382  593872 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:52:15.614407  593872 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:52:15.614416  593872 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:52:15.614424  593872 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:52:15.614429  593872 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 18:52:32.649742  593872 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 18:52:32.649783  593872 cache.go:194] Successfully downloaded all kic artifacts
	I0920 18:52:32.649812  593872 start.go:360] acquireMachinesLock for addons-060912: {Name:mkdf9efeada37d375617519bd8189e870133c61c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 18:52:32.649937  593872 start.go:364] duration metric: took 105.149µs to acquireMachinesLock for "addons-060912"
	I0920 18:52:32.649968  593872 start.go:93] Provisioning new machine with config: &{Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:52:32.650096  593872 start.go:125] createHost starting for "" (driver="docker")
	I0920 18:52:32.652781  593872 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 18:52:32.653060  593872 start.go:159] libmachine.API.Create for "addons-060912" (driver="docker")
	I0920 18:52:32.653099  593872 client.go:168] LocalClient.Create starting
	I0920 18:52:32.653230  593872 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem
	I0920 18:52:32.860960  593872 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem
	I0920 18:52:33.807141  593872 cli_runner.go:164] Run: docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 18:52:33.822909  593872 cli_runner.go:211] docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 18:52:33.823003  593872 network_create.go:284] running [docker network inspect addons-060912] to gather additional debugging logs...
	I0920 18:52:33.823041  593872 cli_runner.go:164] Run: docker network inspect addons-060912
	W0920 18:52:33.836862  593872 cli_runner.go:211] docker network inspect addons-060912 returned with exit code 1
	I0920 18:52:33.836897  593872 network_create.go:287] error running [docker network inspect addons-060912]: docker network inspect addons-060912: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-060912 not found
	I0920 18:52:33.836912  593872 network_create.go:289] output of [docker network inspect addons-060912]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-060912 not found
	
	** /stderr **
	I0920 18:52:33.837018  593872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:52:33.853516  593872 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400048fc60}
	I0920 18:52:33.853561  593872 network_create.go:124] attempt to create docker network addons-060912 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 18:52:33.853624  593872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-060912 addons-060912
	I0920 18:52:33.925138  593872 network_create.go:108] docker network addons-060912 192.168.49.0/24 created
	I0920 18:52:33.925170  593872 kic.go:121] calculated static IP "192.168.49.2" for the "addons-060912" container
	I0920 18:52:33.925251  593872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 18:52:33.939300  593872 cli_runner.go:164] Run: docker volume create addons-060912 --label name.minikube.sigs.k8s.io=addons-060912 --label created_by.minikube.sigs.k8s.io=true
	I0920 18:52:33.956121  593872 oci.go:103] Successfully created a docker volume addons-060912
	I0920 18:52:33.956221  593872 cli_runner.go:164] Run: docker run --rm --name addons-060912-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --entrypoint /usr/bin/test -v addons-060912:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 18:52:35.542485  593872 cli_runner.go:217] Completed: docker run --rm --name addons-060912-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --entrypoint /usr/bin/test -v addons-060912:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.586222321s)
	I0920 18:52:35.542517  593872 oci.go:107] Successfully prepared a docker volume addons-060912
	I0920 18:52:35.542537  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:35.542557  593872 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 18:52:35.542630  593872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-060912:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 18:52:39.667870  593872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-060912:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.12519698s)
	I0920 18:52:39.667901  593872 kic.go:203] duration metric: took 4.125341455s to extract preloaded images to volume ...
	W0920 18:52:39.668057  593872 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 18:52:39.668171  593872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 18:52:39.725179  593872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-060912 --name addons-060912 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060912 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-060912 --network addons-060912 --ip 192.168.49.2 --volume addons-060912:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 18:52:40.064748  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Running}}
	I0920 18:52:40.090550  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.119088  593872 cli_runner.go:164] Run: docker exec addons-060912 stat /var/lib/dpkg/alternatives/iptables
	I0920 18:52:40.194481  593872 oci.go:144] the created container "addons-060912" has a running status.
	I0920 18:52:40.194657  593872 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa...
	I0920 18:52:40.558917  593872 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 18:52:40.602421  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.629886  593872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 18:52:40.629905  593872 kic_runner.go:114] Args: [docker exec --privileged addons-060912 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 18:52:40.708677  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:52:40.734009  593872 machine.go:93] provisionDockerMachine start ...
	I0920 18:52:40.734111  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:40.755383  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:40.755665  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:40.755687  593872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 18:52:40.930414  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-060912
	
	I0920 18:52:40.930441  593872 ubuntu.go:169] provisioning hostname "addons-060912"
	I0920 18:52:40.930507  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:40.955848  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:40.956093  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:40.956114  593872 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-060912 && echo "addons-060912" | sudo tee /etc/hostname
	I0920 18:52:41.124769  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-060912
	
	I0920 18:52:41.124926  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:41.150096  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:41.150348  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:41.150366  593872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-060912' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-060912/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-060912' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 18:52:41.295129  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 18:52:41.295158  593872 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19679-586329/.minikube CaCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19679-586329/.minikube}
	I0920 18:52:41.295190  593872 ubuntu.go:177] setting up certificates
	I0920 18:52:41.295203  593872 provision.go:84] configureAuth start
	I0920 18:52:41.295277  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:41.317921  593872 provision.go:143] copyHostCerts
	I0920 18:52:41.318013  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/ca.pem (1082 bytes)
	I0920 18:52:41.318141  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/cert.pem (1123 bytes)
	I0920 18:52:41.318206  593872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19679-586329/.minikube/key.pem (1679 bytes)
	I0920 18:52:41.318258  593872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem org=jenkins.addons-060912 san=[127.0.0.1 192.168.49.2 addons-060912 localhost minikube]
	I0920 18:52:42.112316  593872 provision.go:177] copyRemoteCerts
	I0920 18:52:42.112394  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 18:52:42.112441  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.134267  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.242047  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 18:52:42.271920  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 18:52:42.299774  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 18:52:42.328079  593872 provision.go:87] duration metric: took 1.032855668s to configureAuth
	I0920 18:52:42.328107  593872 ubuntu.go:193] setting minikube options for container-runtime
	I0920 18:52:42.328339  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:52:42.328485  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.347344  593872 main.go:141] libmachine: Using SSH client type: native
	I0920 18:52:42.347620  593872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 18:52:42.347642  593872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 18:52:42.592794  593872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 18:52:42.592858  593872 machine.go:96] duration metric: took 1.858825465s to provisionDockerMachine
	I0920 18:52:42.592883  593872 client.go:171] duration metric: took 9.939773855s to LocalClient.Create
	I0920 18:52:42.592928  593872 start.go:167] duration metric: took 9.939858146s to libmachine.API.Create "addons-060912"
	I0920 18:52:42.592956  593872 start.go:293] postStartSetup for "addons-060912" (driver="docker")
	I0920 18:52:42.592983  593872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 18:52:42.593088  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 18:52:42.593176  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.610673  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.712244  593872 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 18:52:42.715200  593872 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 18:52:42.715236  593872 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 18:52:42.715248  593872 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 18:52:42.715255  593872 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 18:52:42.715270  593872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-586329/.minikube/addons for local assets ...
	I0920 18:52:42.715339  593872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-586329/.minikube/files for local assets ...
	I0920 18:52:42.715362  593872 start.go:296] duration metric: took 122.386575ms for postStartSetup
	I0920 18:52:42.715678  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:42.734222  593872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/config.json ...
	I0920 18:52:42.734515  593872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:52:42.734561  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.751254  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.847551  593872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 18:52:42.851990  593872 start.go:128] duration metric: took 10.201875795s to createHost
	I0920 18:52:42.852014  593872 start.go:83] releasing machines lock for "addons-060912", held for 10.20206475s
	I0920 18:52:42.852104  593872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060912
	I0920 18:52:42.869047  593872 ssh_runner.go:195] Run: cat /version.json
	I0920 18:52:42.869104  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.869386  593872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 18:52:42.869455  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:52:42.899611  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:42.901003  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:52:43.143986  593872 ssh_runner.go:195] Run: systemctl --version
	I0920 18:52:43.148494  593872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 18:52:43.290058  593872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 18:52:43.294460  593872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:52:43.319067  593872 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0920 18:52:43.319189  593872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 18:52:43.355578  593872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 18:52:43.355601  593872 start.go:495] detecting cgroup driver to use...
	I0920 18:52:43.355665  593872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 18:52:43.355740  593872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 18:52:43.372488  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 18:52:43.384584  593872 docker.go:217] disabling cri-docker service (if available) ...
	I0920 18:52:43.384660  593872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 18:52:43.398596  593872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 18:52:43.413969  593872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 18:52:43.506921  593872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 18:52:43.598933  593872 docker.go:233] disabling docker service ...
	I0920 18:52:43.599074  593872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 18:52:43.619211  593872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 18:52:43.632097  593872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 18:52:43.733486  593872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 18:52:43.832796  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 18:52:43.844479  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 18:52:43.861973  593872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 18:52:43.862048  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.873308  593872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 18:52:43.873384  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.884037  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.894744  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.905984  593872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 18:52:43.916341  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.926330  593872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.942760  593872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 18:52:43.952451  593872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 18:52:43.961121  593872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 18:52:43.969336  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:44.051836  593872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 18:52:44.177573  593872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 18:52:44.177688  593872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 18:52:44.181787  593872 start.go:563] Will wait 60s for crictl version
	I0920 18:52:44.181856  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:52:44.185690  593872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 18:52:44.231062  593872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0920 18:52:44.231227  593872 ssh_runner.go:195] Run: crio --version
	I0920 18:52:44.269973  593872 ssh_runner.go:195] Run: crio --version
	I0920 18:52:44.310781  593872 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0920 18:52:44.313034  593872 cli_runner.go:164] Run: docker network inspect addons-060912 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 18:52:44.329327  593872 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 18:52:44.332861  593872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:52:44.343516  593872 kubeadm.go:883] updating cluster {Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 18:52:44.343644  593872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:44.343708  593872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:44.419323  593872 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:44.419350  593872 crio.go:433] Images already preloaded, skipping extraction
	I0920 18:52:44.419407  593872 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 18:52:44.460038  593872 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 18:52:44.460063  593872 cache_images.go:84] Images are preloaded, skipping loading
	I0920 18:52:44.460072  593872 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0920 18:52:44.460202  593872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-060912 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 18:52:44.460306  593872 ssh_runner.go:195] Run: crio config
	I0920 18:52:44.514388  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:52:44.514413  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:52:44.514425  593872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 18:52:44.514455  593872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-060912 NodeName:addons-060912 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 18:52:44.514692  593872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-060912"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 18:52:44.514779  593872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 18:52:44.524006  593872 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 18:52:44.524086  593872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 18:52:44.532920  593872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0920 18:52:44.550839  593872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 18:52:44.569315  593872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0920 18:52:44.588095  593872 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 18:52:44.591834  593872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 18:52:44.603202  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:52:44.683106  593872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:52:44.698119  593872 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912 for IP: 192.168.49.2
	I0920 18:52:44.698180  593872 certs.go:194] generating shared ca certs ...
	I0920 18:52:44.698214  593872 certs.go:226] acquiring lock for ca certs: {Name:mk7eb18302258cdace745a9485ebacfefa55b617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:44.698372  593872 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key
	I0920 18:52:45.773992  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt ...
	I0920 18:52:45.774024  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt: {Name:mk69bb3c03ec081974b98f7c83bdeca9a6b769c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:45.774223  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key ...
	I0920 18:52:45.774236  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key: {Name:mkb28aa16c08ff68a5c63f20cf7a4bc238a65fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:45.774329  593872 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key
	I0920 18:52:46.306094  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt ...
	I0920 18:52:46.306172  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt: {Name:mk13a902be7ee771aaabf84d4d3b54c93512ec07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.306433  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key ...
	I0920 18:52:46.306468  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key: {Name:mk1a89b4cc2e765480e21d5ef942bf06a139d088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.307202  593872 certs.go:256] generating profile certs ...
	I0920 18:52:46.307348  593872 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key
	I0920 18:52:46.307374  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt with IP's: []
	I0920 18:52:46.605180  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt ...
	I0920 18:52:46.605217  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: {Name:mk8ec6a9f7340d97847cfc91d6f9300f0c6bcb28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.605895  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key ...
	I0920 18:52:46.605916  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.key: {Name:mk386836124c30368ae858b7208f9c6a723630c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.606065  593872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2
	I0920 18:52:46.606089  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 18:52:46.979328  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 ...
	I0920 18:52:46.979362  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2: {Name:mk3de371d8cb695b97e343d91e61d450c7d1fceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.980031  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2 ...
	I0920 18:52:46.980049  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2: {Name:mk9c2eba1553b51025132aa06ce9c8b0e76efbd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:46.980539  593872 certs.go:381] copying /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt.2a5409c2 -> /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt
	I0920 18:52:46.980627  593872 certs.go:385] copying /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key.2a5409c2 -> /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key
	I0920 18:52:46.980686  593872 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key
	I0920 18:52:46.980709  593872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt with IP's: []
	I0920 18:52:47.324830  593872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt ...
	I0920 18:52:47.324865  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt: {Name:mk4ae1dd5d3ae6c97cd47828e57b9a54fe850ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:47.325050  593872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key ...
	I0920 18:52:47.325068  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key: {Name:mkc15d867a2714a19ac6e38280d1d8789074dcb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:52:47.325295  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 18:52:47.325345  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/ca.pem (1082 bytes)
	I0920 18:52:47.325375  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/cert.pem (1123 bytes)
	I0920 18:52:47.325407  593872 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-586329/.minikube/certs/key.pem (1679 bytes)
	I0920 18:52:47.326508  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 18:52:47.355471  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 18:52:47.380228  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 18:52:47.404994  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 18:52:47.431136  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 18:52:47.456460  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 18:52:47.482481  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 18:52:47.506787  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 18:52:47.530822  593872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 18:52:47.555789  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 18:52:47.573794  593872 ssh_runner.go:195] Run: openssl version
	I0920 18:52:47.579677  593872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 18:52:47.589418  593872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.593050  593872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 18:52 /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.593170  593872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 18:52:47.600533  593872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 18:52:47.610126  593872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 18:52:47.613505  593872 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 18:52:47.613554  593872 kubeadm.go:392] StartCluster: {Name:addons-060912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-060912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:47.613633  593872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 18:52:47.613691  593872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 18:52:47.655005  593872 cri.go:89] found id: ""
	I0920 18:52:47.655106  593872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 18:52:47.664307  593872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 18:52:47.673271  593872 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 18:52:47.673378  593872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 18:52:47.682354  593872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 18:52:47.682377  593872 kubeadm.go:157] found existing configuration files:
	
	I0920 18:52:47.682450  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 18:52:47.692197  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 18:52:47.692269  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 18:52:47.701005  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 18:52:47.709846  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 18:52:47.709939  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 18:52:47.718667  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 18:52:47.727606  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 18:52:47.727692  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 18:52:47.736256  593872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 18:52:47.745178  593872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 18:52:47.745277  593872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 18:52:47.753885  593872 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 18:52:47.794524  593872 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 18:52:47.794742  593872 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 18:52:47.830867  593872 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 18:52:47.831080  593872 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 18:52:47.831147  593872 kubeadm.go:310] OS: Linux
	I0920 18:52:47.831230  593872 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 18:52:47.831314  593872 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 18:52:47.831391  593872 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 18:52:47.831469  593872 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 18:52:47.831550  593872 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 18:52:47.831627  593872 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 18:52:47.831704  593872 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 18:52:47.831782  593872 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 18:52:47.831867  593872 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 18:52:47.892879  593872 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 18:52:47.893045  593872 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 18:52:47.893173  593872 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 18:52:47.900100  593872 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 18:52:47.904767  593872 out.go:235]   - Generating certificates and keys ...
	I0920 18:52:47.904883  593872 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 18:52:47.904967  593872 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 18:52:48.301483  593872 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 18:52:48.505712  593872 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 18:52:48.627729  593872 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 18:52:49.408566  593872 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 18:52:49.585470  593872 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 18:52:49.585855  593872 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-060912 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:52:50.403787  593872 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 18:52:50.404133  593872 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-060912 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 18:52:50.541148  593872 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 18:52:50.956925  593872 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 18:52:51.982371  593872 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 18:52:51.982653  593872 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 18:52:52.374506  593872 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 18:52:52.684664  593872 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 18:52:53.299054  593872 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 18:52:53.724444  593872 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 18:52:54.066667  593872 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 18:52:54.067475  593872 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 18:52:54.070541  593872 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 18:52:54.072885  593872 out.go:235]   - Booting up control plane ...
	I0920 18:52:54.072994  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 18:52:54.073071  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 18:52:54.073988  593872 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 18:52:54.087870  593872 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 18:52:54.094550  593872 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 18:52:54.094874  593872 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 18:52:54.193678  593872 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 18:52:54.193802  593872 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 18:52:55.195346  593872 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001638684s
	I0920 18:52:55.195439  593872 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 18:53:00.697120  593872 kubeadm.go:310] [api-check] The API server is healthy after 5.501870038s
	I0920 18:53:00.728997  593872 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 18:53:00.750818  593872 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 18:53:00.777564  593872 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 18:53:00.777765  593872 kubeadm.go:310] [mark-control-plane] Marking the node addons-060912 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 18:53:00.791626  593872 kubeadm.go:310] [bootstrap-token] Using token: 3mukj1.5gr6p80qxuq1esbm
	I0920 18:53:00.793695  593872 out.go:235]   - Configuring RBAC rules ...
	I0920 18:53:00.793825  593872 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 18:53:00.798878  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 18:53:00.806432  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 18:53:00.810066  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 18:53:00.815042  593872 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 18:53:00.818568  593872 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 18:53:01.105603  593872 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 18:53:01.548911  593872 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 18:53:02.106478  593872 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 18:53:02.106507  593872 kubeadm.go:310] 
	I0920 18:53:02.106578  593872 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 18:53:02.106584  593872 kubeadm.go:310] 
	I0920 18:53:02.106721  593872 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 18:53:02.106734  593872 kubeadm.go:310] 
	I0920 18:53:02.106772  593872 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 18:53:02.106834  593872 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 18:53:02.106884  593872 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 18:53:02.106888  593872 kubeadm.go:310] 
	I0920 18:53:02.106941  593872 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 18:53:02.106945  593872 kubeadm.go:310] 
	I0920 18:53:02.106992  593872 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 18:53:02.106997  593872 kubeadm.go:310] 
	I0920 18:53:02.107062  593872 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 18:53:02.107137  593872 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 18:53:02.107203  593872 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 18:53:02.107208  593872 kubeadm.go:310] 
	I0920 18:53:02.107290  593872 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 18:53:02.107368  593872 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 18:53:02.107373  593872 kubeadm.go:310] 
	I0920 18:53:02.107455  593872 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3mukj1.5gr6p80qxuq1esbm \
	I0920 18:53:02.107556  593872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee5188aaaabb34e982a2e59e30a557aaa604ab6ab39002e0379fe9f0994613c \
	I0920 18:53:02.107576  593872 kubeadm.go:310] 	--control-plane 
	I0920 18:53:02.107579  593872 kubeadm.go:310] 
	I0920 18:53:02.107664  593872 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 18:53:02.107668  593872 kubeadm.go:310] 
	I0920 18:53:02.107748  593872 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3mukj1.5gr6p80qxuq1esbm \
	I0920 18:53:02.107850  593872 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eee5188aaaabb34e982a2e59e30a557aaa604ab6ab39002e0379fe9f0994613c 
	I0920 18:53:02.110386  593872 kubeadm.go:310] W0920 18:52:47.790907    1181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:53:02.110692  593872 kubeadm.go:310] W0920 18:52:47.791995    1181 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 18:53:02.110919  593872 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 18:53:02.111098  593872 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 18:53:02.111123  593872 cni.go:84] Creating CNI manager for ""
	I0920 18:53:02.111136  593872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:53:02.113349  593872 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 18:53:02.115174  593872 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 18:53:02.119312  593872 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 18:53:02.119335  593872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 18:53:02.142105  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 18:53:02.431658  593872 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 18:53:02.431817  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:02.431901  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-060912 minikube.k8s.io/updated_at=2024_09_20T18_53_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=addons-060912 minikube.k8s.io/primary=true
	I0920 18:53:02.446105  593872 ops.go:34] apiserver oom_adj: -16
	I0920 18:53:02.565999  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:03.066570  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:03.566114  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:04.066703  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:04.566700  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:05.066202  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:05.566810  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.066185  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.566942  593872 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 18:53:06.656897  593872 kubeadm.go:1113] duration metric: took 4.225127214s to wait for elevateKubeSystemPrivileges
	I0920 18:53:06.656923  593872 kubeadm.go:394] duration metric: took 19.04337458s to StartCluster
	I0920 18:53:06.656941  593872 settings.go:142] acquiring lock: {Name:mk20a33ee294fe7ee1acfd59cbfa4fb0357cdddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:06.657086  593872 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 18:53:06.657504  593872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-586329/kubeconfig: {Name:mke1c46b803a8499b182d8427df0204efbd97826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 18:53:06.658369  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 18:53:06.658394  593872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 18:53:06.658659  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:06.658701  593872 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 18:53:06.658785  593872 addons.go:69] Setting yakd=true in profile "addons-060912"
	I0920 18:53:06.658801  593872 addons.go:234] Setting addon yakd=true in "addons-060912"
	I0920 18:53:06.658825  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.659339  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.659588  593872 addons.go:69] Setting inspektor-gadget=true in profile "addons-060912"
	I0920 18:53:06.659613  593872 addons.go:234] Setting addon inspektor-gadget=true in "addons-060912"
	I0920 18:53:06.659639  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.660068  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.660634  593872 addons.go:69] Setting cloud-spanner=true in profile "addons-060912"
	I0920 18:53:06.660658  593872 addons.go:234] Setting addon cloud-spanner=true in "addons-060912"
	I0920 18:53:06.660694  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.661122  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.664101  593872 addons.go:69] Setting metrics-server=true in profile "addons-060912"
	I0920 18:53:06.664174  593872 addons.go:234] Setting addon metrics-server=true in "addons-060912"
	I0920 18:53:06.664225  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.664719  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.667132  593872 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-060912"
	I0920 18:53:06.667206  593872 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-060912"
	I0920 18:53:06.667241  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.667711  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.680289  593872 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-060912"
	I0920 18:53:06.680324  593872 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-060912"
	I0920 18:53:06.680367  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.680373  593872 addons.go:69] Setting default-storageclass=true in profile "addons-060912"
	I0920 18:53:06.680394  593872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-060912"
	I0920 18:53:06.680712  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.680844  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.691123  593872 addons.go:69] Setting registry=true in profile "addons-060912"
	I0920 18:53:06.691155  593872 addons.go:234] Setting addon registry=true in "addons-060912"
	I0920 18:53:06.691192  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.691227  593872 addons.go:69] Setting gcp-auth=true in profile "addons-060912"
	I0920 18:53:06.691250  593872 mustload.go:65] Loading cluster: addons-060912
	I0920 18:53:06.691423  593872 config.go:182] Loaded profile config "addons-060912": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 18:53:06.691663  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.691671  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.711088  593872 addons.go:69] Setting storage-provisioner=true in profile "addons-060912"
	I0920 18:53:06.711123  593872 addons.go:234] Setting addon storage-provisioner=true in "addons-060912"
	I0920 18:53:06.711160  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.711632  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.711881  593872 addons.go:69] Setting ingress=true in profile "addons-060912"
	I0920 18:53:06.711898  593872 addons.go:234] Setting addon ingress=true in "addons-060912"
	I0920 18:53:06.711935  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.712343  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.723158  593872 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-060912"
	I0920 18:53:06.723179  593872 addons.go:69] Setting ingress-dns=true in profile "addons-060912"
	I0920 18:53:06.723193  593872 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-060912"
	I0920 18:53:06.723201  593872 addons.go:234] Setting addon ingress-dns=true in "addons-060912"
	I0920 18:53:06.723253  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.723525  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.723678  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.728597  593872 addons.go:69] Setting volcano=true in profile "addons-060912"
	I0920 18:53:06.728639  593872 addons.go:234] Setting addon volcano=true in "addons-060912"
	I0920 18:53:06.728679  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.729150  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.748969  593872 out.go:177] * Verifying Kubernetes components...
	I0920 18:53:06.760944  593872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 18:53:06.763463  593872 addons.go:69] Setting volumesnapshots=true in profile "addons-060912"
	I0920 18:53:06.763497  593872 addons.go:234] Setting addon volumesnapshots=true in "addons-060912"
	I0920 18:53:06.763545  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.764046  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.800789  593872 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 18:53:06.802865  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 18:53:06.803037  593872 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 18:53:06.803166  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.821921  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 18:53:06.824580  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 18:53:06.824689  593872 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 18:53:06.827502  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 18:53:06.830370  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 18:53:06.832993  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 18:53:06.836463  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 18:53:06.889558  593872 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 18:53:06.890713  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 18:53:06.893342  593872 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 18:53:06.893363  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 18:53:06.893428  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.919558  593872 addons.go:234] Setting addon default-storageclass=true in "addons-060912"
	I0920 18:53:06.919598  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.923397  593872 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 18:53:06.924653  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.928129  593872 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-060912"
	I0920 18:53:06.936192  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:06.936680  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:06.949702  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:06.931219  593872 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:53:06.949911  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 18:53:06.949984  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.950158  593872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 18:53:06.953013  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:06.950351  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 18:53:06.931395  593872 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 18:53:06.955338  593872 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 18:53:06.955415  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	W0920 18:53:06.950474  593872 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 18:53:06.962421  593872 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 18:53:06.962836  593872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:53:06.962853  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 18:53:06.962918  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:06.950358  593872 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 18:53:06.964337  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:06.979232  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 18:53:06.984485  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 18:53:06.979317  593872 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:53:07.001978  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 18:53:07.002102  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.019211  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:07.021298  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.038210  593872 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:53:07.038235  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 18:53:07.038313  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.038929  593872 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 18:53:07.039087  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 18:53:07.039119  593872 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 18:53:07.039189  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.058038  593872 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 18:53:07.062112  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 18:53:07.062141  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 18:53:07.062211  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.064857  593872 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 18:53:07.070082  593872 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 18:53:07.070106  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 18:53:07.070177  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.078060  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 18:53:07.078085  593872 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 18:53:07.078158  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.095923  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.105890  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.131430  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.134781  593872 out.go:177]   - Using image docker.io/busybox:stable
	I0920 18:53:07.136896  593872 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 18:53:07.138991  593872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:53:07.139091  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 18:53:07.139160  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.171105  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.173369  593872 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 18:53:07.173394  593872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 18:53:07.173464  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:07.203565  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.227872  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.240177  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.254474  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.256680  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.271504  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.296820  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:07.535433  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 18:53:07.535502  593872 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 18:53:07.580640  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 18:53:07.609002  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 18:53:07.636963  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 18:53:07.644507  593872 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 18:53:07.644575  593872 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 18:53:07.650358  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 18:53:07.748416  593872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 18:53:07.760493  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 18:53:07.760561  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 18:53:07.769526  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 18:53:07.769590  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 18:53:07.772733  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 18:53:07.772813  593872 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 18:53:07.776888  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 18:53:07.792143  593872 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 18:53:07.792217  593872 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 18:53:07.799482  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 18:53:07.799550  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 18:53:07.823809  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 18:53:07.841182  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 18:53:07.875970  593872 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 18:53:07.876048  593872 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 18:53:07.940871  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 18:53:07.940939  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 18:53:07.944185  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 18:53:07.944261  593872 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 18:53:07.969102  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 18:53:07.969176  593872 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 18:53:08.008762  593872 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:53:08.008849  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 18:53:08.024429  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 18:53:08.024500  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 18:53:08.101275  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 18:53:08.101369  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 18:53:08.104605  593872 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 18:53:08.104668  593872 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 18:53:08.125142  593872 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:53:08.125223  593872 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 18:53:08.153003  593872 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:53:08.153081  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 18:53:08.192290  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 18:53:08.213189  593872 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 18:53:08.213258  593872 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 18:53:08.237465  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 18:53:08.237541  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 18:53:08.266264  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 18:53:08.273091  593872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 18:53:08.273160  593872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 18:53:08.295064  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 18:53:08.337669  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 18:53:08.337737  593872 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 18:53:08.361168  593872 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 18:53:08.361244  593872 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 18:53:08.381395  593872 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 18:53:08.381462  593872 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 18:53:08.434137  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 18:53:08.434209  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 18:53:08.476175  593872 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 18:53:08.476243  593872 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 18:53:08.523767  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 18:53:08.523881  593872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 18:53:08.546597  593872 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:08.546686  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 18:53:08.570312  593872 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:53:08.570338  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 18:53:08.601983  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 18:53:08.602012  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 18:53:08.688455  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 18:53:08.767479  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:08.771253  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 18:53:08.771280  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 18:53:08.909646  593872 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:53:08.909674  593872 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 18:53:09.083120  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 18:53:09.366397  593872 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.475641839s)
	I0920 18:53:09.366431  593872 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 18:53:10.656771  593872 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-060912" context rescaled to 1 replicas
	I0920 18:53:12.612519  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.031804896s)
	I0920 18:53:12.612722  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.924224384s)
	I0920 18:53:12.612735  593872 addons.go:475] Verifying addon ingress=true in "addons-060912"
	I0920 18:53:12.612613  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.975577954s)
	I0920 18:53:12.612622  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.96220398s)
	I0920 18:53:12.612632  593872 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.864142489s)
	I0920 18:53:12.613848  593872 node_ready.go:35] waiting up to 6m0s for node "addons-060912" to be "Ready" ...
	I0920 18:53:12.612640  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.835693426s)
	I0920 18:53:12.612649  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.788768719s)
	I0920 18:53:12.612677  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.771429754s)
	I0920 18:53:12.612686  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.420319709s)
	I0920 18:53:12.614217  593872 addons.go:475] Verifying addon registry=true in "addons-060912"
	I0920 18:53:12.612700  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.346358677s)
	I0920 18:53:12.612710  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.317571088s)
	I0920 18:53:12.612603  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.003520156s)
	I0920 18:53:12.614629  593872 addons.go:475] Verifying addon metrics-server=true in "addons-060912"
	I0920 18:53:12.615210  593872 out.go:177] * Verifying ingress addon...
	I0920 18:53:12.616733  593872 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-060912 service yakd-dashboard -n yakd-dashboard
	
	I0920 18:53:12.616811  593872 out.go:177] * Verifying registry addon...
	I0920 18:53:12.618820  593872 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 18:53:12.621444  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 18:53:12.657275  593872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:53:12.657379  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:12.659859  593872 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 18:53:12.659933  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0920 18:53:12.681416  593872 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 18:53:12.767378  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.999849337s)
	W0920 18:53:12.767492  593872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:53:12.767543  593872 retry.go:31] will retry after 146.594076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 18:53:12.914870  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 18:53:13.020525  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.937354531s)
	I0920 18:53:13.020615  593872 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-060912"
	I0920 18:53:13.023641  593872 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 18:53:13.026354  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 18:53:13.059604  593872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:53:13.059630  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:13.157073  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:13.158584  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:13.530916  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:13.623039  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:13.625710  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:14.031109  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:14.132801  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:14.133257  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:14.536672  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:14.617333  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:14.625172  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:14.626190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.037784  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:15.139083  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.140081  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:15.530807  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:15.625424  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:15.627060  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:15.811713  593872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.896729674s)
	I0920 18:53:16.030500  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:16.131340  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:16.132189  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:16.245904  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 18:53:16.246064  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:16.271250  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:16.400062  593872 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 18:53:16.420647  593872 addons.go:234] Setting addon gcp-auth=true in "addons-060912"
	I0920 18:53:16.420709  593872 host.go:66] Checking if "addons-060912" exists ...
	I0920 18:53:16.421221  593872 cli_runner.go:164] Run: docker container inspect addons-060912 --format={{.State.Status}}
	I0920 18:53:16.453077  593872 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 18:53:16.453133  593872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060912
	I0920 18:53:16.472122  593872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/addons-060912/id_rsa Username:docker}
	I0920 18:53:16.530200  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:16.594150  593872 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 18:53:16.595930  593872 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 18:53:16.598033  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 18:53:16.598096  593872 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 18:53:16.617576  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:16.623818  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:16.627395  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:16.654047  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 18:53:16.654122  593872 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 18:53:16.675521  593872 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:53:16.675615  593872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 18:53:16.696169  593872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 18:53:17.030750  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:17.123475  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:17.129949  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:17.398772  593872 addons.go:475] Verifying addon gcp-auth=true in "addons-060912"
	I0920 18:53:17.400714  593872 out.go:177] * Verifying gcp-auth addon...
	I0920 18:53:17.403254  593872 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 18:53:17.424327  593872 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 18:53:17.424348  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:17.530789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:17.622216  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:17.624897  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:17.908276  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:18.032558  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:18.123585  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:18.125296  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:18.409068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:18.535824  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:18.619988  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:18.631248  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:18.632258  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:18.906952  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:19.031275  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:19.123974  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:19.125329  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:19.407041  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:19.530437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:19.623209  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:19.626778  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:19.907358  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:20.031410  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:20.124451  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:20.127885  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:20.408163  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:20.530304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:20.624641  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:20.627521  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:20.640571  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:20.906590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:21.030716  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:21.123396  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:21.125966  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:21.407311  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:21.530248  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:21.632768  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:21.633772  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:21.907461  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:22.030655  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:22.122491  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:22.124032  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:22.407518  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:22.529985  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:22.627170  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:22.627503  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:22.906531  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:23.030537  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:23.117862  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:23.122924  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:23.124460  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:23.406221  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:23.530623  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:23.622743  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:23.624225  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:23.906508  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:24.030964  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:24.123359  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:24.124718  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:24.406947  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:24.530397  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:24.622656  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:24.625089  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:24.906346  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:25.030863  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:25.123281  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:25.124762  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:25.406740  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:25.529934  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:25.618285  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:25.623223  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:25.625275  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:25.907343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:26.029874  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:26.122880  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:26.125558  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:26.406428  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:26.529876  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:26.622569  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:26.624263  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:26.907876  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:27.030892  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:27.122763  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:27.125752  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:27.407385  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:27.529890  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:27.623664  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:27.625235  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:27.906932  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:28.031546  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:28.117142  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:28.124003  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:28.125268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:28.407350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:28.530316  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:28.622496  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:28.625367  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:28.906550  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:29.030563  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:29.123518  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:29.125027  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:29.406272  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:29.530318  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:29.623487  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:29.626002  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:29.907054  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:30.034955  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:30.117886  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:30.123770  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:30.130136  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:30.406799  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:30.530068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:30.623431  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:30.625815  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:30.906721  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:31.030766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:31.122470  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:31.125131  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:31.406466  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:31.530019  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:31.623357  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:31.625132  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:31.906429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:32.030130  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:32.122724  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:32.125475  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:32.406950  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:32.530073  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:32.617262  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:32.623567  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:32.624606  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:32.906774  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:33.030618  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:33.122463  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:33.124352  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:33.406566  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:33.529976  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:33.623194  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:33.625569  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:33.906893  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:34.030568  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:34.124214  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:34.125664  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:34.406789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:34.530093  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:34.617503  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:34.622267  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:34.624640  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:34.906716  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:35.030814  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:35.122711  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:35.124484  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:35.406906  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:35.530090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:35.628989  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:35.643448  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:35.907749  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:36.033222  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:36.123115  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:36.125074  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:36.406478  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:36.530372  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:36.617537  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:36.623124  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:36.624707  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:36.907705  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:37.032609  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:37.122402  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:37.124436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:37.410158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:37.530964  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:37.623290  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:37.624638  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:37.908427  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:38.032501  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:38.123432  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:38.125097  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:38.407006  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:38.531090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:38.617637  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:38.623531  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:38.624757  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:38.907994  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:39.030429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:39.122831  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:39.125472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:39.406900  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:39.530683  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:39.622682  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:39.625408  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:39.906436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:40.032512  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:40.122833  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:40.125873  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:40.407481  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:40.530433  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:40.623305  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:40.625601  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:40.907104  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:41.030489  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:41.117535  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:41.123596  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:41.125740  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:41.408742  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:41.530414  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:41.623195  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:41.624942  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:41.906278  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:42.030219  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:42.124663  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:42.126897  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:42.406451  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:42.529842  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:42.623685  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:42.624861  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:42.907530  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:43.030270  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:43.118084  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:43.122827  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:43.124122  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:43.406531  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:43.530126  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:43.623043  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:43.624496  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:43.906195  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:44.030547  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:44.123616  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:44.124870  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:44.407296  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:44.530337  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:44.623165  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:44.624362  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:44.906714  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:45.030883  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:45.127738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:45.130246  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:45.407317  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:45.530858  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:45.617283  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:45.623572  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:45.626121  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:45.907349  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:46.029896  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:46.122967  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:46.124510  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:46.406878  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:46.529797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:46.623467  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:46.625901  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:46.907092  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:47.029591  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:47.123043  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:47.124598  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:47.407203  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:47.530170  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:47.617840  593872 node_ready.go:53] node "addons-060912" has status "Ready":"False"
	I0920 18:53:47.622988  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:47.625052  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:47.906284  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:48.030282  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:48.123543  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:48.125837  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:48.407567  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:48.530061  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:48.622624  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:48.624101  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:48.906319  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:49.029685  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:49.123390  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:49.125759  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:49.406675  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:49.530424  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:49.623059  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:49.624375  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:49.914789  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:50.073315  593872 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 18:53:50.073346  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:50.177542  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:50.178097  593872 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 18:53:50.178119  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:50.178911  593872 node_ready.go:49] node "addons-060912" has status "Ready":"True"
	I0920 18:53:50.178932  593872 node_ready.go:38] duration metric: took 37.565064524s for node "addons-060912" to be "Ready" ...
	I0920 18:53:50.178943  593872 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:53:50.209529  593872 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:50.412871  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:50.534356  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:50.633995  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:50.635103  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:50.926509  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:51.040298  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:51.123773  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:51.127158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:51.407040  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:51.532755  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:51.632804  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:51.634087  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:51.932590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:52.032350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:52.124423  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:52.129016  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:52.216901  593872 pod_ready.go:93] pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.216927  593872 pod_ready.go:82] duration metric: took 2.007357992s for pod "coredns-7c65d6cfc9-cl27s" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.216954  593872 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.227598  593872 pod_ready.go:93] pod "etcd-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.227626  593872 pod_ready.go:82] duration metric: took 10.663807ms for pod "etcd-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.227642  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.233476  593872 pod_ready.go:93] pod "kube-apiserver-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.233503  593872 pod_ready.go:82] duration metric: took 5.853067ms for pod "kube-apiserver-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.233518  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.239607  593872 pod_ready.go:93] pod "kube-controller-manager-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.239631  593872 pod_ready.go:82] duration metric: took 6.104882ms for pod "kube-controller-manager-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.239646  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c522g" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.245402  593872 pod_ready.go:93] pod "kube-proxy-c522g" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.245429  593872 pod_ready.go:82] duration metric: took 5.77497ms for pod "kube-proxy-c522g" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.245442  593872 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.407590  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:52.532029  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:52.614340  593872 pod_ready.go:93] pod "kube-scheduler-addons-060912" in "kube-system" namespace has status "Ready":"True"
	I0920 18:53:52.614364  593872 pod_ready.go:82] duration metric: took 368.914093ms for pod "kube-scheduler-addons-060912" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.614376  593872 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace to be "Ready" ...
	I0920 18:53:52.628872  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:52.630785  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:52.907684  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:53.032348  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:53.123194  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:53.125921  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:53.407116  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:53.531905  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:53.632352  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:53.633355  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:53.907223  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:54.031444  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:54.123369  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:54.125452  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:54.406161  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:54.531797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:54.621522  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:54.625378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:54.626368  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:54.908405  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:55.033311  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:55.129235  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:55.136020  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:55.407641  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:55.532754  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:55.629988  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:55.630688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:55.908033  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:56.032956  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:56.131881  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:56.135239  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:56.407252  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:56.532357  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:56.634496  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:56.634762  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:56.675444  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:56.907632  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:57.032216  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:57.124438  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:57.129823  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:57.407619  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:57.531487  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:57.629839  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:57.629800  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:57.908525  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:58.032421  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:58.127448  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:58.128931  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:58.406845  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:58.532378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:58.629947  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:58.637452  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:58.907318  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:59.038238  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:59.123250  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:53:59.123692  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:59.124754  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:59.407399  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:53:59.531831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:53:59.625264  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:53:59.627267  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:53:59.907619  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:00.040374  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:00.143816  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:00.162956  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:00.414806  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:00.535472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:00.638003  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:00.654461  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:00.906727  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:01.033760  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:01.122760  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:01.127364  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:01.407190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:01.531912  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:01.620649  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:01.623645  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:01.625824  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:01.907353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:02.031678  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:02.127559  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:02.136258  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:02.407697  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:02.532649  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:02.624581  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:02.626864  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:02.907243  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:03.031855  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:03.124370  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:03.125997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:03.406572  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:03.531774  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:03.622198  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:03.623803  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:03.626096  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:03.906862  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:04.034339  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:04.123394  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:04.125057  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:04.406456  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:04.531188  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:04.624236  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:04.625251  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:04.907437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:05.034015  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:05.136328  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:05.140535  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:05.407750  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:05.531688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:05.622928  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:05.625977  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:05.628216  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:05.907392  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:06.035534  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:06.137035  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:06.140689  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:06.407645  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:06.532556  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:06.631129  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:06.637720  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:06.907369  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:07.033152  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:07.131047  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:07.132304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:07.407831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:07.534227  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:07.628560  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:07.630035  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:07.908146  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:08.046766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:08.128098  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:08.146618  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:08.148909  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:08.406526  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:08.531145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:08.622943  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:08.625835  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:08.907156  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:09.032047  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:09.125239  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:09.127763  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:09.406510  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:09.535708  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:09.627079  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:09.628475  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:09.908539  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:10.032103  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:10.128287  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:10.130963  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:10.134006  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:10.408230  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:10.537067  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:10.635350  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:10.636775  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:10.909280  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:11.031922  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:11.141473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:11.143400  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:11.409121  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:11.533393  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:11.624605  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:11.626270  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:11.908376  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:12.033643  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:12.134194  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:12.135720  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:12.139063  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:12.408488  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:12.533149  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:12.625412  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:12.628837  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:12.908197  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:13.039091  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:13.148688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:13.150626  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:13.407231  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:13.538129  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:13.633910  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:13.634284  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:13.907146  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:14.031963  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:14.138068  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:14.139837  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:14.406746  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:14.532196  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:14.621320  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:14.623936  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:14.625755  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:14.915462  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:15.039044  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:15.151579  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:15.154892  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:15.407061  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:15.532461  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:15.623936  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:15.631335  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:15.907583  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:16.031676  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:16.132063  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:16.132159  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:16.407214  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:16.531877  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:16.622218  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:16.624731  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:16.627246  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:16.907112  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:17.031940  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:17.123879  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:17.125637  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:17.407684  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:17.531652  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:17.623237  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:17.624952  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:17.907148  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:18.032472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:18.124997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:18.128424  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:18.408608  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:18.533769  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:18.622613  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:18.625361  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:18.626956  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:18.907688  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:19.032365  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:19.126642  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:19.128047  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:19.407217  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:19.532128  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:19.625119  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:19.637814  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:19.908672  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:20.032425  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:20.134537  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:20.138948  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:20.407812  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:20.531792  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:20.623610  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:20.625524  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:20.907557  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:21.032364  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:21.122717  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:21.125484  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:21.128074  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:21.408400  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:21.532268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:21.626506  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:21.627950  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:21.907994  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:22.032143  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:22.125563  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:22.128530  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:22.407181  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:22.531457  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:22.627992  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:22.630782  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:22.908478  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:23.033385  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:23.150200  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:23.158628  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:23.175368  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:23.407473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:23.562996  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:23.633408  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:23.649343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:23.908349  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:24.051429  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:24.130915  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:24.133182  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:24.407528  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:24.534113  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:24.624993  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:24.625264  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:24.906353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:25.031654  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:25.125434  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:25.125905  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:25.407969  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:25.532605  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:25.630968  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:25.632259  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:25.636454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:25.907689  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:26.036437  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:26.130115  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:26.132311  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:26.407831  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:26.532065  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:26.632353  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:26.635310  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:26.907144  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:27.031562  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:27.126473  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:27.129258  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:27.407457  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:27.534341  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:27.628767  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:27.630758  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:27.906634  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:28.032860  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:28.133735  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:28.135066  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:28.139879  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:28.407295  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:28.530907  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:28.623822  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:28.625241  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:28.908410  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:29.032110  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:29.123334  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:29.125825  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:29.408334  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:29.531931  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:29.635718  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:29.637010  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:29.907747  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:30.032207  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:30.125423  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:30.129378  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:30.429404  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:30.531075  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:30.623099  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:30.623506  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:30.626472  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:30.907454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:31.031130  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:31.122882  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:31.125933  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:31.409131  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:31.536612  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:31.637346  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:31.637939  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:31.909737  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:32.033045  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:32.124243  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:32.125947  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:32.415304  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:32.531436  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:32.623785  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:32.628047  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:32.906594  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:33.032489  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:33.121437  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:33.123577  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:33.126223  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:33.407727  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:33.544797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:33.639503  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:33.639917  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:33.908501  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:34.042595  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:34.129639  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:34.143896  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:34.408132  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:34.532017  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:34.625507  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:34.625737  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:34.908204  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:35.031539  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:35.122869  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:35.125949  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:35.407190  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:35.531204  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:35.621395  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:35.622954  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:35.627663  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:35.907161  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:36.031668  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:36.123711  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:36.125683  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:36.406719  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:36.531817  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:36.624197  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:36.625728  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:36.906856  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:37.039202  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:37.132125  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:37.139533  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:37.407783  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:37.532343  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:37.627594  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:37.629986  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:37.635302  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:37.907649  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:38.036353  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:38.129070  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:38.141087  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:38.406532  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:38.532632  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:38.629637  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:38.631244  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:38.907315  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:39.032138  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:39.144645  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:39.146049  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:39.410944  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:39.532413  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:39.626005  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:39.634918  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:39.907693  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:40.048331  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:40.135999  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:40.145681  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:40.147455  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:40.420292  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:40.532803  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:40.632950  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:40.633952  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:40.907773  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:41.031222  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:41.124458  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:41.126401  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 18:54:41.407738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:41.541065  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:41.627392  593872 kapi.go:107] duration metric: took 1m29.005936692s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 18:54:41.627849  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:41.907865  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:42.031882  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:42.128072  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:42.136230  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:42.408023  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:42.535450  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:42.628719  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:42.907633  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:43.037631  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:43.126577  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:43.408257  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:43.532830  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:43.622674  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:43.906383  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:44.032885  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:44.130566  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:44.412103  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:44.531674  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:44.624493  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:44.625354  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:44.907163  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:45.041059  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:45.146171  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:45.409090  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:45.538109  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:45.625064  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:45.906825  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:46.032091  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:46.126749  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:46.408000  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:46.532548  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:46.625356  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:46.906911  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:47.032115  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:47.126363  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:47.126613  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:47.408116  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:47.537259  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:47.631788  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:47.906519  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:48.032902  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:48.124487  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:48.407049  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:48.531900  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:48.623826  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:48.907268  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:49.032168  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:49.124770  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:49.407794  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:49.532588  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:49.621573  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:49.624122  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:49.907138  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:50.031140  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:50.125351  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:50.407077  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:50.531766  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:50.623649  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:50.906983  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:51.031224  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:51.133790  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:51.407000  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:51.532440  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:51.621696  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:51.629042  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:51.910023  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:52.034638  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:52.131483  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:52.407512  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:52.531145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 18:54:52.624175  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:52.906583  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:53.031793  593872 kapi.go:107] duration metric: took 1m40.005442028s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 18:54:53.123528  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:53.407310  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:53.621853  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:53.624084  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:53.907521  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:54.125743  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:54.406985  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:54.624565  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:54.907009  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:55.123603  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:55.414564  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:55.633000  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:55.634997  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:55.907186  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:56.129208  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:56.409356  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:56.626668  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:56.907820  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:57.127443  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:57.407400  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:57.633052  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:57.636316  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:54:57.907158  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:58.124639  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:58.408153  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:58.628065  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:58.906454  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:59.137305  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:59.409662  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:54:59.625505  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:54:59.908145  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:00.234020  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:00.240334  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:00.412169  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:00.638990  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:00.907738  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:01.137609  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:01.408120  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:01.625029  593872 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 18:55:01.908497  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:02.123863  593872 kapi.go:107] duration metric: took 1m49.505042131s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 18:55:02.407797  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:02.620522  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:02.908784  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:03.409791  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:03.906980  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:04.408196  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:04.628352  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:04.908939  593872 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 18:55:05.407593  593872 kapi.go:107] duration metric: took 1m48.004337441s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 18:55:05.410378  593872 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-060912 cluster.
	I0920 18:55:05.412210  593872 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 18:55:05.414346  593872 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 18:55:05.416700  593872 out.go:177] * Enabled addons: inspektor-gadget, cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 18:55:05.419147  593872 addons.go:510] duration metric: took 1m58.760444537s for enable addons: enabled=[inspektor-gadget cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 18:55:07.120857  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:09.122235  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:11.123435  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:13.620855  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:15.621324  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:18.121934  593872 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"False"
	I0920 18:55:20.620894  593872 pod_ready.go:93] pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace has status "Ready":"True"
	I0920 18:55:20.620923  593872 pod_ready.go:82] duration metric: took 1m28.006539781s for pod "metrics-server-84c5f94fbc-6n52n" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.620936  593872 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.626791  593872 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace has status "Ready":"True"
	I0920 18:55:20.626827  593872 pod_ready.go:82] duration metric: took 5.883525ms for pod "nvidia-device-plugin-daemonset-6c4pc" in "kube-system" namespace to be "Ready" ...
	I0920 18:55:20.626855  593872 pod_ready.go:39] duration metric: took 1m30.447894207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 18:55:20.626873  593872 api_server.go:52] waiting for apiserver process to appear ...
	I0920 18:55:20.626917  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:20.627002  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:20.683602  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:20.683673  593872 cri.go:89] found id: ""
	I0920 18:55:20.683688  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:20.683760  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.687980  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:20.688058  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:20.725151  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:20.725197  593872 cri.go:89] found id: ""
	I0920 18:55:20.725206  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:20.725263  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.728863  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:20.728936  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:20.768741  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:20.768764  593872 cri.go:89] found id: ""
	I0920 18:55:20.768772  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:20.768830  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.773058  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:20.773130  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:20.811084  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:20.811108  593872 cri.go:89] found id: ""
	I0920 18:55:20.811117  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:20.811173  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.814706  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:20.814779  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:20.856300  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:20.856326  593872 cri.go:89] found id: ""
	I0920 18:55:20.856334  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:20.856389  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.860484  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:20.860560  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:20.902306  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:20.902329  593872 cri.go:89] found id: ""
	I0920 18:55:20.902347  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:20.902405  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.905966  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:20.906048  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:20.949793  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:20.949815  593872 cri.go:89] found id: ""
	I0920 18:55:20.949823  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:20.949881  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:20.953468  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:20.953498  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:20.971004  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:20.971114  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:21.056388  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:21.056425  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:21.104981  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:21.105015  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:21.151277  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:21.151308  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:21.229700  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:21.229738  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:21.276985  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:21.277013  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:21.366118  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:21.366161  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:21.585779  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:21.585813  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:21.630226  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:21.630253  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:21.675630  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:21.675658  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:21.774311  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:21.774353  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:24.342050  593872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:55:24.356377  593872 api_server.go:72] duration metric: took 2m17.697948817s to wait for apiserver process to appear ...
	I0920 18:55:24.356407  593872 api_server.go:88] waiting for apiserver healthz status ...
	I0920 18:55:24.356442  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:24.356512  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:24.396349  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:24.396374  593872 cri.go:89] found id: ""
	I0920 18:55:24.396383  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:24.396440  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.400025  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:24.400103  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:24.437632  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:24.437656  593872 cri.go:89] found id: ""
	I0920 18:55:24.437665  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:24.437765  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.441226  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:24.441310  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:24.480492  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:24.480515  593872 cri.go:89] found id: ""
	I0920 18:55:24.480523  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:24.480588  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.484432  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:24.484514  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:24.534785  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:24.534810  593872 cri.go:89] found id: ""
	I0920 18:55:24.534819  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:24.534880  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.538697  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:24.538963  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:24.588756  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:24.588780  593872 cri.go:89] found id: ""
	I0920 18:55:24.588789  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:24.588877  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.592738  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:24.592830  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:24.634956  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:24.634979  593872 cri.go:89] found id: ""
	I0920 18:55:24.634987  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:24.635066  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.638509  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:24.638580  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:24.682689  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:24.682712  593872 cri.go:89] found id: ""
	I0920 18:55:24.682720  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:24.682778  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:24.686419  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:24.686490  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:24.769481  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:24.769516  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:24.824413  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:24.824464  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:24.873507  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:24.873540  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:24.928565  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:24.928603  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:24.972207  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:24.972240  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:25.034067  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:25.034101  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:25.088479  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:25.088515  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:25.180642  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:25.180679  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:25.197983  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:25.198018  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:25.348415  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:25.348488  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:25.396676  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:25.396702  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:27.999369  593872 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 18:55:28.011064  593872 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 18:55:28.012493  593872 api_server.go:141] control plane version: v1.31.1
	I0920 18:55:28.012529  593872 api_server.go:131] duration metric: took 3.656113679s to wait for apiserver health ...
	I0920 18:55:28.012540  593872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 18:55:28.012573  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 18:55:28.012671  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 18:55:28.054623  593872 cri.go:89] found id: "8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:28.054647  593872 cri.go:89] found id: ""
	I0920 18:55:28.054656  593872 logs.go:276] 1 containers: [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef]
	I0920 18:55:28.054716  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.058765  593872 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 18:55:28.058859  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 18:55:28.103813  593872 cri.go:89] found id: "ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:28.103835  593872 cri.go:89] found id: ""
	I0920 18:55:28.103843  593872 logs.go:276] 1 containers: [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395]
	I0920 18:55:28.103902  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.107830  593872 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 18:55:28.107903  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 18:55:28.156157  593872 cri.go:89] found id: "1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:28.156183  593872 cri.go:89] found id: ""
	I0920 18:55:28.156191  593872 logs.go:276] 1 containers: [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f]
	I0920 18:55:28.156248  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.160447  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 18:55:28.160566  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 18:55:28.201058  593872 cri.go:89] found id: "0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:28.201081  593872 cri.go:89] found id: ""
	I0920 18:55:28.201089  593872 logs.go:276] 1 containers: [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f]
	I0920 18:55:28.201166  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.204832  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 18:55:28.204932  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 18:55:28.243472  593872 cri.go:89] found id: "6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:28.243506  593872 cri.go:89] found id: ""
	I0920 18:55:28.243516  593872 logs.go:276] 1 containers: [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5]
	I0920 18:55:28.243582  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.247662  593872 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 18:55:28.247823  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 18:55:28.294254  593872 cri.go:89] found id: "4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:28.294288  593872 cri.go:89] found id: ""
	I0920 18:55:28.294297  593872 logs.go:276] 1 containers: [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831]
	I0920 18:55:28.294369  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.297872  593872 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 18:55:28.297956  593872 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 18:55:28.336421  593872 cri.go:89] found id: "b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:28.336456  593872 cri.go:89] found id: ""
	I0920 18:55:28.336465  593872 logs.go:276] 1 containers: [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b]
	I0920 18:55:28.336532  593872 ssh_runner.go:195] Run: which crictl
	I0920 18:55:28.340282  593872 logs.go:123] Gathering logs for kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] ...
	I0920 18:55:28.340356  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831"
	I0920 18:55:28.412211  593872 logs.go:123] Gathering logs for kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] ...
	I0920 18:55:28.412251  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b"
	I0920 18:55:28.460209  593872 logs.go:123] Gathering logs for container status ...
	I0920 18:55:28.460238  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 18:55:28.511508  593872 logs.go:123] Gathering logs for kubelet ...
	I0920 18:55:28.511544  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 18:55:28.604612  593872 logs.go:123] Gathering logs for etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] ...
	I0920 18:55:28.604650  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395"
	I0920 18:55:28.654841  593872 logs.go:123] Gathering logs for coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] ...
	I0920 18:55:28.654872  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f"
	I0920 18:55:28.695824  593872 logs.go:123] Gathering logs for kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] ...
	I0920 18:55:28.695854  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f"
	I0920 18:55:28.738546  593872 logs.go:123] Gathering logs for kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] ...
	I0920 18:55:28.738579  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5"
	I0920 18:55:28.778897  593872 logs.go:123] Gathering logs for CRI-O ...
	I0920 18:55:28.778928  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 18:55:28.872309  593872 logs.go:123] Gathering logs for dmesg ...
	I0920 18:55:28.872347  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 18:55:28.889387  593872 logs.go:123] Gathering logs for describe nodes ...
	I0920 18:55:28.889419  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 18:55:29.037307  593872 logs.go:123] Gathering logs for kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] ...
	I0920 18:55:29.037336  593872 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef"
	I0920 18:55:31.614602  593872 system_pods.go:59] 18 kube-system pods found
	I0920 18:55:31.614646  593872 system_pods.go:61] "coredns-7c65d6cfc9-cl27s" [04689caf-fd31-41a8-b632-da305d969b77] Running
	I0920 18:55:31.614653  593872 system_pods.go:61] "csi-hostpath-attacher-0" [688a011d-4561-4c00-844b-6aa7f297a0aa] Running
	I0920 18:55:31.614658  593872 system_pods.go:61] "csi-hostpath-resizer-0" [106e8af5-f95f-436e-9fab-304f7ea18617] Running
	I0920 18:55:31.614663  593872 system_pods.go:61] "csi-hostpathplugin-7jhqn" [6803e01f-d3a5-4fe1-b76c-a936b8eb8a69] Running
	I0920 18:55:31.614667  593872 system_pods.go:61] "etcd-addons-060912" [f2728dff-aab5-4b32-bf02-93f8d2b5a6c1] Running
	I0920 18:55:31.614671  593872 system_pods.go:61] "kindnet-tl865" [9c700cfd-066f-47c6-aade-257d64dd87fd] Running
	I0920 18:55:31.614675  593872 system_pods.go:61] "kube-apiserver-addons-060912" [af9cd9b5-fbf4-4bb2-b6b8-58e119cc2e54] Running
	I0920 18:55:31.614679  593872 system_pods.go:61] "kube-controller-manager-addons-060912" [e2b17a09-a56a-42f3-885f-853c02ecc200] Running
	I0920 18:55:31.614683  593872 system_pods.go:61] "kube-ingress-dns-minikube" [1b76bbee-eac5-4d2e-b598-514d3650c987] Running
	I0920 18:55:31.614687  593872 system_pods.go:61] "kube-proxy-c522g" [3a56e42d-23c2-4774-b82c-3c6b2daa3a1f] Running
	I0920 18:55:31.614691  593872 system_pods.go:61] "kube-scheduler-addons-060912" [a6533c75-ea94-4da5-bb5e-7a23d9d92d69] Running
	I0920 18:55:31.614697  593872 system_pods.go:61] "metrics-server-84c5f94fbc-6n52n" [707188cc-7e99-491b-b510-82f0f9320fee] Running
	I0920 18:55:31.614703  593872 system_pods.go:61] "nvidia-device-plugin-daemonset-6c4pc" [70208489-2144-41c7-b72c-895d0344ccd9] Running
	I0920 18:55:31.614706  593872 system_pods.go:61] "registry-66c9cd494c-w8gt6" [ded46fe6-d8da-4546-81fd-d1f1949dcadb] Running
	I0920 18:55:31.614710  593872 system_pods.go:61] "registry-proxy-8ghgp" [5a98470b-31f7-4f1c-9586-f681f375453b] Running
	I0920 18:55:31.614714  593872 system_pods.go:61] "snapshot-controller-56fcc65765-r8g9v" [b22e42d4-0119-4486-b078-a8a3532a14c2] Running
	I0920 18:55:31.614717  593872 system_pods.go:61] "snapshot-controller-56fcc65765-wp8r8" [0aa17fbb-ebc2-41dc-8a5a-de69a6f62b73] Running
	I0920 18:55:31.614725  593872 system_pods.go:61] "storage-provisioner" [76adfe52-d569-4e95-82f8-414bc1dcbc24] Running
	I0920 18:55:31.614731  593872 system_pods.go:74] duration metric: took 3.602185872s to wait for pod list to return data ...
	I0920 18:55:31.614744  593872 default_sa.go:34] waiting for default service account to be created ...
	I0920 18:55:31.617429  593872 default_sa.go:45] found service account: "default"
	I0920 18:55:31.617456  593872 default_sa.go:55] duration metric: took 2.706624ms for default service account to be created ...
	I0920 18:55:31.617465  593872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 18:55:31.627751  593872 system_pods.go:86] 18 kube-system pods found
	I0920 18:55:31.627789  593872 system_pods.go:89] "coredns-7c65d6cfc9-cl27s" [04689caf-fd31-41a8-b632-da305d969b77] Running
	I0920 18:55:31.627797  593872 system_pods.go:89] "csi-hostpath-attacher-0" [688a011d-4561-4c00-844b-6aa7f297a0aa] Running
	I0920 18:55:31.627803  593872 system_pods.go:89] "csi-hostpath-resizer-0" [106e8af5-f95f-436e-9fab-304f7ea18617] Running
	I0920 18:55:31.627808  593872 system_pods.go:89] "csi-hostpathplugin-7jhqn" [6803e01f-d3a5-4fe1-b76c-a936b8eb8a69] Running
	I0920 18:55:31.627813  593872 system_pods.go:89] "etcd-addons-060912" [f2728dff-aab5-4b32-bf02-93f8d2b5a6c1] Running
	I0920 18:55:31.627817  593872 system_pods.go:89] "kindnet-tl865" [9c700cfd-066f-47c6-aade-257d64dd87fd] Running
	I0920 18:55:31.627821  593872 system_pods.go:89] "kube-apiserver-addons-060912" [af9cd9b5-fbf4-4bb2-b6b8-58e119cc2e54] Running
	I0920 18:55:31.627826  593872 system_pods.go:89] "kube-controller-manager-addons-060912" [e2b17a09-a56a-42f3-885f-853c02ecc200] Running
	I0920 18:55:31.627831  593872 system_pods.go:89] "kube-ingress-dns-minikube" [1b76bbee-eac5-4d2e-b598-514d3650c987] Running
	I0920 18:55:31.627836  593872 system_pods.go:89] "kube-proxy-c522g" [3a56e42d-23c2-4774-b82c-3c6b2daa3a1f] Running
	I0920 18:55:31.627840  593872 system_pods.go:89] "kube-scheduler-addons-060912" [a6533c75-ea94-4da5-bb5e-7a23d9d92d69] Running
	I0920 18:55:31.627844  593872 system_pods.go:89] "metrics-server-84c5f94fbc-6n52n" [707188cc-7e99-491b-b510-82f0f9320fee] Running
	I0920 18:55:31.627863  593872 system_pods.go:89] "nvidia-device-plugin-daemonset-6c4pc" [70208489-2144-41c7-b72c-895d0344ccd9] Running
	I0920 18:55:31.627867  593872 system_pods.go:89] "registry-66c9cd494c-w8gt6" [ded46fe6-d8da-4546-81fd-d1f1949dcadb] Running
	I0920 18:55:31.627873  593872 system_pods.go:89] "registry-proxy-8ghgp" [5a98470b-31f7-4f1c-9586-f681f375453b] Running
	I0920 18:55:31.627879  593872 system_pods.go:89] "snapshot-controller-56fcc65765-r8g9v" [b22e42d4-0119-4486-b078-a8a3532a14c2] Running
	I0920 18:55:31.627884  593872 system_pods.go:89] "snapshot-controller-56fcc65765-wp8r8" [0aa17fbb-ebc2-41dc-8a5a-de69a6f62b73] Running
	I0920 18:55:31.627888  593872 system_pods.go:89] "storage-provisioner" [76adfe52-d569-4e95-82f8-414bc1dcbc24] Running
	I0920 18:55:31.627898  593872 system_pods.go:126] duration metric: took 10.426903ms to wait for k8s-apps to be running ...
	I0920 18:55:31.627918  593872 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 18:55:31.627995  593872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:55:31.639886  593872 system_svc.go:56] duration metric: took 11.957384ms WaitForService to wait for kubelet
	I0920 18:55:31.639916  593872 kubeadm.go:582] duration metric: took 2m24.981492962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 18:55:31.639936  593872 node_conditions.go:102] verifying NodePressure condition ...
	I0920 18:55:31.643318  593872 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 18:55:31.643354  593872 node_conditions.go:123] node cpu capacity is 2
	I0920 18:55:31.643367  593872 node_conditions.go:105] duration metric: took 3.425286ms to run NodePressure ...
	I0920 18:55:31.643399  593872 start.go:241] waiting for startup goroutines ...
	I0920 18:55:31.643414  593872 start.go:246] waiting for cluster config update ...
	I0920 18:55:31.643431  593872 start.go:255] writing updated cluster config ...
	I0920 18:55:31.643750  593872 ssh_runner.go:195] Run: rm -f paused
	I0920 18:55:31.999069  593872 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 18:55:32.001537  593872 out.go:177] * Done! kubectl is now configured to use "addons-060912" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 19:09:01 addons-060912 crio[963]: time="2024-09-20 19:09:01.897016258Z" level=info msg="Stopped pod sandbox (already stopped): fa7c4458c6139586a593cd6d6b36fecbffbd062a55daef0b474381538eace24a" id=df7b9201-abd0-4225-8e8a-4ce08ac85378 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:09:01 addons-060912 crio[963]: time="2024-09-20 19:09:01.897391304Z" level=info msg="Removing pod sandbox: fa7c4458c6139586a593cd6d6b36fecbffbd062a55daef0b474381538eace24a" id=2ddb8719-4a9f-40e4-96cb-49f414018beb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:09:01 addons-060912 crio[963]: time="2024-09-20 19:09:01.904884386Z" level=info msg="Removed pod sandbox: fa7c4458c6139586a593cd6d6b36fecbffbd062a55daef0b474381538eace24a" id=2ddb8719-4a9f-40e4-96cb-49f414018beb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 20 19:09:06 addons-060912 crio[963]: time="2024-09-20 19:09:06.400726913Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3f7386ab-9570-48b5-bddf-5134c58c54ba name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:09:06 addons-060912 crio[963]: time="2024-09-20 19:09:06.400963441Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3f7386ab-9570-48b5-bddf-5134c58c54ba name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:09:19 addons-060912 crio[963]: time="2024-09-20 19:09:19.401828514Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=455ae86b-44d1-4d33-8fe1-1af2ef488017 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:09:19 addons-060912 crio[963]: time="2024-09-20 19:09:19.402056787Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=455ae86b-44d1-4d33-8fe1-1af2ef488017 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:09:34 addons-060912 crio[963]: time="2024-09-20 19:09:34.401832639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b6b4830-4bc6-41c2-bdd4-7ee66ae286bf name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:09:34 addons-060912 crio[963]: time="2024-09-20 19:09:34.402079604Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9b6b4830-4bc6-41c2-bdd4-7ee66ae286bf name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:09:49 addons-060912 crio[963]: time="2024-09-20 19:09:49.400637148Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=abf02c60-b7cd-4d4b-bd99-d7bde1f9cc02 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:09:49 addons-060912 crio[963]: time="2024-09-20 19:09:49.401703899Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=abf02c60-b7cd-4d4b-bd99-d7bde1f9cc02 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:02 addons-060912 crio[963]: time="2024-09-20 19:10:02.400687630Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=130bf2f7-b7f5-413b-a465-4897271e6c89 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:02 addons-060912 crio[963]: time="2024-09-20 19:10:02.400925118Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=130bf2f7-b7f5-413b-a465-4897271e6c89 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:16 addons-060912 crio[963]: time="2024-09-20 19:10:16.401003404Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9207f43d-68f6-427a-b9f3-26743368b83d name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:16 addons-060912 crio[963]: time="2024-09-20 19:10:16.401249992Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9207f43d-68f6-427a-b9f3-26743368b83d name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:28 addons-060912 crio[963]: time="2024-09-20 19:10:28.400772423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b4b0b89c-8b28-48da-ae3f-a9a04432d1b9 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:28 addons-060912 crio[963]: time="2024-09-20 19:10:28.401006776Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b4b0b89c-8b28-48da-ae3f-a9a04432d1b9 name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:38 addons-060912 crio[963]: time="2024-09-20 19:10:38.843449988Z" level=info msg="Stopping container: 26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402 (timeout: 30s)" id=88563fb1-4c13-4ec5-be03-2aff59cb3e7d name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:10:39 addons-060912 crio[963]: time="2024-09-20 19:10:39.401046479Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=471e504a-f63e-40cf-b7d9-dd25f33ff00b name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:39 addons-060912 crio[963]: time="2024-09-20 19:10:39.401296717Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=471e504a-f63e-40cf-b7d9-dd25f33ff00b name=/runtime.v1.ImageService/ImageStatus
	Sep 20 19:10:40 addons-060912 crio[963]: time="2024-09-20 19:10:40.039735957Z" level=info msg="Stopped container 26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402: kube-system/metrics-server-84c5f94fbc-6n52n/metrics-server" id=88563fb1-4c13-4ec5-be03-2aff59cb3e7d name=/runtime.v1.RuntimeService/StopContainer
	Sep 20 19:10:40 addons-060912 crio[963]: time="2024-09-20 19:10:40.040985091Z" level=info msg="Stopping pod sandbox: d47ebd6d1ffd23e42bf48526d99f9ad9568682ee3a32640e99a7bf05968785f5" id=4ce29656-b248-4a03-a74e-7bace668a0ce name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 20 19:10:40 addons-060912 crio[963]: time="2024-09-20 19:10:40.053234893Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-6n52n Namespace:kube-system ID:d47ebd6d1ffd23e42bf48526d99f9ad9568682ee3a32640e99a7bf05968785f5 UID:707188cc-7e99-491b-b510-82f0f9320fee NetNS:/var/run/netns/dd2a676e-c5a4-4e88-867f-d1f7da0f51c4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 20 19:10:40 addons-060912 crio[963]: time="2024-09-20 19:10:40.053458966Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-6n52n from CNI network \"kindnet\" (type=ptp)"
	Sep 20 19:10:40 addons-060912 crio[963]: time="2024-09-20 19:10:40.088603664Z" level=info msg="Stopped pod sandbox: d47ebd6d1ffd23e42bf48526d99f9ad9568682ee3a32640e99a7bf05968785f5" id=4ce29656-b248-4a03-a74e-7bace668a0ce name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	167a4fd5fe5fe       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   f7bcf6d431e23       hello-world-app-55bf9c44b4-92l5t
	bb932ecde10ba       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                         5 minutes ago       Running             nginx                     0                   d3a5d7aa5e6d5       nginx
	4a43484742705       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            15 minutes ago      Running             gcp-auth                  0                   41279eea3be85       gcp-auth-89d5ffd79-lnzdp
	26abc82a1efc9       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago      Exited              metrics-server            0                   d47ebd6d1ffd2       metrics-server-84c5f94fbc-6n52n
	b6bb91f96aedc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        16 minutes ago      Running             storage-provisioner       0                   8bd9bba6c8fc6       storage-provisioner
	1a880bc579bf0       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        16 minutes ago      Running             coredns                   0                   5b3730f2d41b7       coredns-7c65d6cfc9-cl27s
	b8685b3b7a398       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        17 minutes ago      Running             kindnet-cni               0                   a3e64840ab606       kindnet-tl865
	6b08aa03c509c       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        17 minutes ago      Running             kube-proxy                0                   16ec6dded1779       kube-proxy-c522g
	4ecd6cb0f6955       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        17 minutes ago      Running             kube-controller-manager   0                   cf3a116aeab5b       kube-controller-manager-addons-060912
	0f324b0fef4f9       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        17 minutes ago      Running             kube-scheduler            0                   3d36f26aa452e       kube-scheduler-addons-060912
	8bee65ae4a888       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        17 minutes ago      Running             kube-apiserver            0                   33b4572492492       kube-apiserver-addons-060912
	ea2efa9e4710b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        17 minutes ago      Running             etcd                      0                   f7b5fa9394991       etcd-addons-060912
	
	
	==> coredns [1a880bc579bf0164b532480580911ed58aba250cf26f9f07f9ed24de63f8174f] <==
	[INFO] 10.244.0.18:55683 - 36665 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078695s
	[INFO] 10.244.0.18:40033 - 64165 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002757275s
	[INFO] 10.244.0.18:40033 - 10146 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002422465s
	[INFO] 10.244.0.18:44258 - 36529 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00067368s
	[INFO] 10.244.0.18:44258 - 3251 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000648646s
	[INFO] 10.244.0.18:48701 - 30933 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114313s
	[INFO] 10.244.0.18:48701 - 3798 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177706s
	[INFO] 10.244.0.18:43291 - 11795 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065912s
	[INFO] 10.244.0.18:43291 - 45806 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060553s
	[INFO] 10.244.0.18:54945 - 47277 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055015s
	[INFO] 10.244.0.18:54945 - 42927 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080122s
	[INFO] 10.244.0.18:54866 - 8361 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001695403s
	[INFO] 10.244.0.18:54866 - 41643 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001756359s
	[INFO] 10.244.0.18:33956 - 27160 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067043s
	[INFO] 10.244.0.18:33956 - 20762 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000052537s
	[INFO] 10.244.0.20:52499 - 34827 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000205341s
	[INFO] 10.244.0.20:36942 - 16052 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000363782s
	[INFO] 10.244.0.20:52995 - 29444 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000351162s
	[INFO] 10.244.0.20:44078 - 60085 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000383892s
	[INFO] 10.244.0.20:54831 - 11107 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000215918s
	[INFO] 10.244.0.20:42723 - 50453 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000198564s
	[INFO] 10.244.0.20:33980 - 22876 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003708615s
	[INFO] 10.244.0.20:36030 - 39141 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003676779s
	[INFO] 10.244.0.20:46057 - 16877 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005906983s
	[INFO] 10.244.0.20:59156 - 51441 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.005414694s
	
	
	==> describe nodes <==
	Name:               addons-060912
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-060912
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=addons-060912
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T18_53_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-060912
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 18:52:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-060912
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:10:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:08:39 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:08:39 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:08:39 +0000   Fri, 20 Sep 2024 18:52:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:08:39 +0000   Fri, 20 Sep 2024 18:53:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-060912
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 10dc14ff36a34258b0be727d4ac3c9e0
	  System UUID:                f67c7638-9fc9-4a4c-946b-9e8a422e1126
	  Boot ID:                    b363b069-6c72-47b0-a80b-36cf6b75e261
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-world-app-55bf9c44b4-92l5t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  gcp-auth                    gcp-auth-89d5ffd79-lnzdp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-cl27s                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-060912                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-tl865                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-060912             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-060912    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-c522g                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-060912             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node addons-060912 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node addons-060912 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node addons-060912 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node addons-060912 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node addons-060912 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node addons-060912 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                node-controller  Node addons-060912 event: Registered Node addons-060912 in Controller
	  Normal   NodeReady                16m                kubelet          Node addons-060912 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [ea2efa9e4710ba21d601ca0fc1c54d51c8be43913a5692ba729c377915af4395] <==
	{"level":"info","ts":"2024-09-20T18:52:55.963430Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:52:55.963737Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.967032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T18:52:55.967327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T18:52:55.967356Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T18:52:55.967795Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:55.967940Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T18:52:55.968794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T18:52:55.971183Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.975635Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.975703Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T18:52:55.979680Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T18:53:07.939493Z","caller":"traceutil/trace.go:171","msg":"trace[436272735] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"123.951195ms","start":"2024-09-20T18:53:07.815524Z","end":"2024-09-20T18:53:07.939475Z","steps":["trace[436272735] 'process raft request'  (duration: 87.482157ms)","trace[436272735] 'compare'  (duration: 36.052588ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T18:53:08.037242Z","caller":"traceutil/trace.go:171","msg":"trace[849823426] linearizableReadLoop","detail":"{readStateIndex:392; appliedIndex:391; }","duration":"221.629804ms","start":"2024-09-20T18:53:07.815591Z","end":"2024-09-20T18:53:08.037220Z","steps":["trace[849823426] 'read index received'  (duration: 447.974µs)","trace[849823426] 'applied index is now lower than readState.Index'  (duration: 221.179918ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T18:53:08.037369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.740244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-09-20T18:53:08.159742Z","caller":"traceutil/trace.go:171","msg":"trace[402661395] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:385; }","duration":"282.386041ms","start":"2024-09-20T18:53:07.815587Z","end":"2024-09-20T18:53:08.097973Z","steps":["trace[402661395] 'agreement among raft nodes before linearized reading'  (duration: 221.690792ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T18:53:08.159844Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T18:53:07.815565Z","time spent":"344.255539ms","remote":"127.0.0.1:37374","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":636,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-09-20T18:53:09.679754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.759103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-20T18:53:09.680079Z","caller":"traceutil/trace.go:171","msg":"trace[154019258] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:389; }","duration":"118.292131ms","start":"2024-09-20T18:53:09.561774Z","end":"2024-09-20T18:53:09.680066Z","steps":["trace[154019258] 'range keys from in-memory index tree'  (duration: 117.688178ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:02:56.467684Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1571}
	{"level":"info","ts":"2024-09-20T19:02:56.500782Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1571,"took":"32.660846ms","hash":1649838481,"current-db-size-bytes":6402048,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":3543040,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-20T19:02:56.500833Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1649838481,"revision":1571,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T19:07:56.473479Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1988}
	{"level":"info","ts":"2024-09-20T19:07:56.491216Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1988,"took":"17.112274ms","hash":130007652,"current-db-size-bytes":6402048,"current-db-size":"6.4 MB","current-db-size-in-use-bytes":4730880,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-09-20T19:07:56.491262Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":130007652,"revision":1988,"compact-revision":1571}
	
	
	==> gcp-auth [4a43484742705aed20cd218f80a63f0e4090a96ee4ee0cef03af1f076f0bfd2b] <==
	2024/09/20 18:55:32 Ready to write response ...
	2024/09/20 18:55:32 Ready to marshal response ...
	2024/09/20 18:55:32 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:36 Ready to marshal response ...
	2024/09/20 19:03:36 Ready to write response ...
	2024/09/20 19:03:47 Ready to marshal response ...
	2024/09/20 19:03:47 Ready to write response ...
	2024/09/20 19:04:03 Ready to marshal response ...
	2024/09/20 19:04:03 Ready to write response ...
	2024/09/20 19:04:37 Ready to marshal response ...
	2024/09/20 19:04:37 Ready to write response ...
	2024/09/20 19:05:07 Ready to marshal response ...
	2024/09/20 19:05:07 Ready to write response ...
	2024/09/20 19:07:26 Ready to marshal response ...
	2024/09/20 19:07:26 Ready to write response ...
	2024/09/20 19:07:57 Ready to marshal response ...
	2024/09/20 19:07:57 Ready to write response ...
	2024/09/20 19:07:57 Ready to marshal response ...
	2024/09/20 19:07:57 Ready to write response ...
	2024/09/20 19:08:05 Ready to marshal response ...
	2024/09/20 19:08:05 Ready to write response ...
	
	
	==> kernel <==
	 19:10:40 up  2:53,  0 users,  load average: 0.29, 0.48, 1.18
	Linux addons-060912 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b8685b3b7a3987088251541f11659df517d059b87e9de4097a4c48ea8553f83b] <==
	I0920 19:08:39.469914       1 main.go:299] handling current node
	I0920 19:08:49.469623       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:08:49.469753       1 main.go:299] handling current node
	I0920 19:08:59.470154       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:08:59.470899       1 main.go:299] handling current node
	I0920 19:09:09.470403       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:09:09.470445       1 main.go:299] handling current node
	I0920 19:09:19.473003       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:09:19.473052       1 main.go:299] handling current node
	I0920 19:09:29.470548       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:09:29.470735       1 main.go:299] handling current node
	I0920 19:09:39.475240       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:09:39.475275       1 main.go:299] handling current node
	I0920 19:09:49.475093       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:09:49.475128       1 main.go:299] handling current node
	I0920 19:09:59.469624       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:09:59.469747       1 main.go:299] handling current node
	I0920 19:10:09.469596       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:10:09.469632       1 main.go:299] handling current node
	I0920 19:10:19.477136       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:10:19.477211       1 main.go:299] handling current node
	I0920 19:10:29.469631       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:10:29.469665       1 main.go:299] handling current node
	I0920 19:10:39.469627       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:10:39.469767       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8bee65ae4a8880696f986d8fd89501ca5d8a64a824966964abd14bdac6eeaaef] <==
	 > logger="UnhandledError"
	E0920 18:55:20.383384       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	E0920 18:55:20.385944       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	E0920 18:55:20.391750       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.60.42:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.60.42:443: connect: connection refused" logger="UnhandledError"
	I0920 18:55:20.475621       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 19:03:36.739548       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.251.149"}
	I0920 19:04:15.141850       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 19:04:54.811710       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:04:54.811767       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:04:54.840211       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:04:54.840257       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:04:54.872229       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:04:54.872286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 19:04:55.122560       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 19:04:55.122691       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 19:04:55.850688       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 19:04:56.123803       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 19:04:56.145936       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 19:05:01.083961       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 19:05:02.112962       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 19:05:06.770356       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 19:05:07.101143       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.191.188"}
	I0920 19:07:26.475827       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.94.229"}
	E0920 19:07:28.875842       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0920 19:08:21.862643       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [4ecd6cb0f69552b2d40ec8543f50e007904b62462d6abbbbe961863d795a4831] <==
	E0920 19:08:35.843740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:08:39.399398       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-060912"
	I0920 19:08:54.121972       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0920 19:08:55.681311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="7.663µs"
	W0920 19:08:58.151225       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:08:58.151267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:09:09.284869       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:09:09.285267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:09:10.525291       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:09:10.525352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:09:28.086705       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:09:28.086751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:09:36.549801       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:09:36.549844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:09:49.671116       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:09:49.671158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:09:49.807306       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:09:49.807350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:10:10.147205       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:10:10.147253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:10:26.108796       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:10:26.108838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 19:10:26.128891       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 19:10:26.128936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 19:10:38.821826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="7.139µs"
	
	
	==> kube-proxy [6b08aa03c509ceee25e8c05e283855fdd301507c980f70586a012834c72dd6b5] <==
	I0920 18:53:11.974563       1 server_linux.go:66] "Using iptables proxy"
	I0920 18:53:12.292405       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 18:53:12.292610       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 18:53:12.395134       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 18:53:12.395264       1 server_linux.go:169] "Using iptables Proxier"
	I0920 18:53:12.411910       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 18:53:12.466911       1 server.go:483] "Version info" version="v1.31.1"
	I0920 18:53:12.467076       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 18:53:12.468804       1 config.go:199] "Starting service config controller"
	I0920 18:53:12.468933       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 18:53:12.469006       1 config.go:105] "Starting endpoint slice config controller"
	I0920 18:53:12.469013       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 18:53:12.469600       1 config.go:328] "Starting node config controller"
	I0920 18:53:12.469649       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 18:53:12.569297       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 18:53:12.579772       1 shared_informer.go:320] Caches are synced for service config
	I0920 18:53:12.619741       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0f324b0fef4f943cbb8945c41237ab9b082f97ce9c4e465767aa506c3a9d8a0f] <==
	W0920 18:52:59.368286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.368811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.368955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 18:52:59.369008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.369112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 18:52:59.369165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.369263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0920 18:52:59.368369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.369732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.369322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.370408       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 18:52:59.371930       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 18:52:59.373052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 18:52:59.374756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 18:52:59.374808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0920 18:52:59.373291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0920 18:52:59.373528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0920 18:52:59.373573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0920 18:52:59.374022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 18:52:59.376432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.376147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 18:52:59.377169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 18:53:00.561937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 19:09:49 addons-060912 kubelet[1488]: E0920 19:09:49.401986    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="06f0a745-85f9-4338-bb9f-bce49e7ec861"
	Sep 20 19:09:51 addons-060912 kubelet[1488]: E0920 19:09:51.769042    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859391768789234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:09:51 addons-060912 kubelet[1488]: E0920 19:09:51.769075    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859391768789234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:01 addons-060912 kubelet[1488]: E0920 19:10:01.772365    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859401772109999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:01 addons-060912 kubelet[1488]: E0920 19:10:01.772864    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859401772109999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:02 addons-060912 kubelet[1488]: E0920 19:10:02.401179    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="06f0a745-85f9-4338-bb9f-bce49e7ec861"
	Sep 20 19:10:11 addons-060912 kubelet[1488]: E0920 19:10:11.775346    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859411775005275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:11 addons-060912 kubelet[1488]: E0920 19:10:11.775385    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859411775005275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:16 addons-060912 kubelet[1488]: E0920 19:10:16.401677    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="06f0a745-85f9-4338-bb9f-bce49e7ec861"
	Sep 20 19:10:21 addons-060912 kubelet[1488]: E0920 19:10:21.778471    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859421778209004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:21 addons-060912 kubelet[1488]: E0920 19:10:21.778511    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859421778209004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:28 addons-060912 kubelet[1488]: E0920 19:10:28.401244    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="06f0a745-85f9-4338-bb9f-bce49e7ec861"
	Sep 20 19:10:31 addons-060912 kubelet[1488]: E0920 19:10:31.780886    1488 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859431780664895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:31 addons-060912 kubelet[1488]: E0920 19:10:31.780924    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726859431780664895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 19:10:39 addons-060912 kubelet[1488]: E0920 19:10:39.401704    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="06f0a745-85f9-4338-bb9f-bce49e7ec861"
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.158091    1488 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/707188cc-7e99-491b-b510-82f0f9320fee-tmp-dir\") pod \"707188cc-7e99-491b-b510-82f0f9320fee\" (UID: \"707188cc-7e99-491b-b510-82f0f9320fee\") "
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.158159    1488 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6ctf\" (UniqueName: \"kubernetes.io/projected/707188cc-7e99-491b-b510-82f0f9320fee-kube-api-access-l6ctf\") pod \"707188cc-7e99-491b-b510-82f0f9320fee\" (UID: \"707188cc-7e99-491b-b510-82f0f9320fee\") "
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.158839    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/707188cc-7e99-491b-b510-82f0f9320fee-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "707188cc-7e99-491b-b510-82f0f9320fee" (UID: "707188cc-7e99-491b-b510-82f0f9320fee"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.166419    1488 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707188cc-7e99-491b-b510-82f0f9320fee-kube-api-access-l6ctf" (OuterVolumeSpecName: "kube-api-access-l6ctf") pod "707188cc-7e99-491b-b510-82f0f9320fee" (UID: "707188cc-7e99-491b-b510-82f0f9320fee"). InnerVolumeSpecName "kube-api-access-l6ctf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.259068    1488 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/707188cc-7e99-491b-b510-82f0f9320fee-tmp-dir\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.259105    1488 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l6ctf\" (UniqueName: \"kubernetes.io/projected/707188cc-7e99-491b-b510-82f0f9320fee-kube-api-access-l6ctf\") on node \"addons-060912\" DevicePath \"\""
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.551453    1488 scope.go:117] "RemoveContainer" containerID="26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402"
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.584574    1488 scope.go:117] "RemoveContainer" containerID="26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402"
	Sep 20 19:10:40 addons-060912 kubelet[1488]: E0920 19:10:40.585263    1488 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402\": container with ID starting with 26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402 not found: ID does not exist" containerID="26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402"
	Sep 20 19:10:40 addons-060912 kubelet[1488]: I0920 19:10:40.585306    1488 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402"} err="failed to get container status \"26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402\": rpc error: code = NotFound desc = could not find container \"26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402\": container with ID starting with 26abc82a1efc959e29bc2f628918dd4eca7bf86c2753158811a6cdbb917d1402 not found: ID does not exist"
	
	
	==> storage-provisioner [b6bb91f96aedcf859be9e5aeb0d364423ca21915d0fb376bd36caefb6936c622] <==
	I0920 18:53:50.915207       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 18:53:50.945131       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 18:53:50.945260       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 18:53:50.953155       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 18:53:50.953416       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9!
	I0920 18:53:50.953624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e274d82f-245d-49e4-a33f-104ef4bee3c3", APIVersion:"v1", ResourceVersion:"947", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9 became leader
	I0920 18:53:51.053580       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-060912_3157c921-ea39-49b8-87b1-669c9d4d53b9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-060912 -n addons-060912
helpers_test.go:261: (dbg) Run:  kubectl --context addons-060912 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-060912 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-060912 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-060912/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 18:55:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hgwr8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hgwr8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/busybox to addons-060912
	  Normal   Pulling    13m (x4 over 15m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 15m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x6 over 15m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x62 over 15m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (350.48s)

                                                
                                    

Test pass (295/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.41
9 TestDownloadOnly/v1.20.0/DeleteAll 0.36
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.31.1/json-events 6.8
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.22
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 196.66
31 TestAddons/serial/GCPAuth/Namespaces 0.24
35 TestAddons/parallel/InspektorGadget 11.08
38 TestAddons/parallel/CSI 61.78
39 TestAddons/parallel/Headlamp 17.73
40 TestAddons/parallel/CloudSpanner 6.57
41 TestAddons/parallel/LocalPath 52.33
42 TestAddons/parallel/NvidiaDevicePlugin 6.58
43 TestAddons/parallel/Yakd 11.72
44 TestAddons/StoppedEnableDisable 6.22
45 TestCertOptions 38.17
46 TestCertExpiration 241
48 TestForceSystemdFlag 37.12
49 TestForceSystemdEnv 42.4
55 TestErrorSpam/setup 31.4
56 TestErrorSpam/start 0.74
57 TestErrorSpam/status 1.05
58 TestErrorSpam/pause 1.79
59 TestErrorSpam/unpause 1.79
60 TestErrorSpam/stop 1.44
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 75.2
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 26.26
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.2
72 TestFunctional/serial/CacheCmd/cache/add_local 1.41
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 54.32
81 TestFunctional/serial/ComponentHealth 0.11
82 TestFunctional/serial/LogsCmd 1.68
83 TestFunctional/serial/LogsFileCmd 1.72
84 TestFunctional/serial/InvalidService 4.68
86 TestFunctional/parallel/ConfigCmd 0.55
87 TestFunctional/parallel/DashboardCmd 10.33
88 TestFunctional/parallel/DryRun 0.41
89 TestFunctional/parallel/InternationalLanguage 0.21
90 TestFunctional/parallel/StatusCmd 1.19
94 TestFunctional/parallel/ServiceCmdConnect 10.69
95 TestFunctional/parallel/AddonsCmd 0.2
96 TestFunctional/parallel/PersistentVolumeClaim 24.71
98 TestFunctional/parallel/SSHCmd 0.8
99 TestFunctional/parallel/CpCmd 2.06
101 TestFunctional/parallel/FileSync 0.34
102 TestFunctional/parallel/CertSync 2.18
106 TestFunctional/parallel/NodeLabels 0.14
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
110 TestFunctional/parallel/License 0.3
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.37
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
124 TestFunctional/parallel/ProfileCmd/profile_list 0.4
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
126 TestFunctional/parallel/MountCmd/any-port 8.23
127 TestFunctional/parallel/ServiceCmd/List 0.53
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.62
130 TestFunctional/parallel/ServiceCmd/Format 0.5
131 TestFunctional/parallel/ServiceCmd/URL 0.39
132 TestFunctional/parallel/MountCmd/specific-port 2.28
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.17
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.25
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.58
141 TestFunctional/parallel/ImageCommands/Setup 0.85
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.07
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.62
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.66
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 171.66
159 TestMultiControlPlane/serial/DeployApp 10.5
160 TestMultiControlPlane/serial/PingHostFromPods 1.56
161 TestMultiControlPlane/serial/AddWorkerNode 62.93
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
164 TestMultiControlPlane/serial/CopyFile 19.36
165 TestMultiControlPlane/serial/StopSecondaryNode 12.78
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
167 TestMultiControlPlane/serial/RestartSecondaryNode 21.95
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.41
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 271.2
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.53
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
172 TestMultiControlPlane/serial/StopCluster 35.71
173 TestMultiControlPlane/serial/RestartCluster 124.96
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
175 TestMultiControlPlane/serial/AddSecondaryNode 69.77
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
180 TestJSONOutput/start/Command 52.29
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.76
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.66
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.89
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
205 TestKicCustomNetwork/create_custom_network 37.84
206 TestKicCustomNetwork/use_default_bridge_network 35.52
207 TestKicExistingNetwork 31.51
208 TestKicCustomSubnet 32.38
209 TestKicStaticIP 32.88
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 68.43
214 TestMountStart/serial/StartWithMountFirst 6.87
215 TestMountStart/serial/VerifyMountFirst 0.26
216 TestMountStart/serial/StartWithMountSecond 9.62
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.64
219 TestMountStart/serial/VerifyMountPostDelete 0.27
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 8.14
222 TestMountStart/serial/VerifyMountPostStop 0.27
225 TestMultiNode/serial/FreshStart2Nodes 73.92
226 TestMultiNode/serial/DeployApp2Nodes 6.03
227 TestMultiNode/serial/PingHostFrom2Pods 0.97
228 TestMultiNode/serial/AddNode 57.55
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.7
231 TestMultiNode/serial/CopyFile 10.11
232 TestMultiNode/serial/StopNode 2.31
233 TestMultiNode/serial/StartAfterStop 9.8
234 TestMultiNode/serial/RestartKeepsNodes 103.14
235 TestMultiNode/serial/DeleteNode 5.57
236 TestMultiNode/serial/StopMultiNode 23.96
237 TestMultiNode/serial/RestartMultiNode 53.22
238 TestMultiNode/serial/ValidateNameConflict 34.88
243 TestPreload 127.7
245 TestScheduledStopUnix 109.01
248 TestInsufficientStorage 13.45
249 TestRunningBinaryUpgrade 63.26
251 TestKubernetesUpgrade 389.85
252 TestMissingContainerUpgrade 167.41
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 38.93
256 TestNoKubernetes/serial/StartWithStopK8s 14.97
257 TestNoKubernetes/serial/Start 10.07
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
259 TestNoKubernetes/serial/ProfileList 1.2
260 TestNoKubernetes/serial/Stop 1.27
261 TestNoKubernetes/serial/StartNoArgs 8.21
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
263 TestStoppedBinaryUpgrade/Setup 0.72
264 TestStoppedBinaryUpgrade/Upgrade 70.77
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
274 TestPause/serial/Start 79.68
275 TestPause/serial/SecondStartNoReconfiguration 29.69
276 TestPause/serial/Pause 0.81
277 TestPause/serial/VerifyStatus 0.41
278 TestPause/serial/Unpause 0.92
279 TestPause/serial/PauseAgain 1.21
280 TestPause/serial/DeletePaused 3.13
281 TestPause/serial/VerifyDeletedResources 4.78
289 TestNetworkPlugins/group/false 5.27
294 TestStartStop/group/old-k8s-version/serial/FirstStart 192.25
296 TestStartStop/group/no-preload/serial/FirstStart 63.5
297 TestStartStop/group/old-k8s-version/serial/DeployApp 11.79
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.71
299 TestStartStop/group/old-k8s-version/serial/Stop 12.62
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
301 TestStartStop/group/old-k8s-version/serial/SecondStart 130.27
302 TestStartStop/group/no-preload/serial/DeployApp 10.44
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.37
304 TestStartStop/group/no-preload/serial/Stop 12.04
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/no-preload/serial/SecondStart 280.51
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
310 TestStartStop/group/old-k8s-version/serial/Pause 3
312 TestStartStop/group/embed-certs/serial/FirstStart 77.69
313 TestStartStop/group/embed-certs/serial/DeployApp 11.36
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
315 TestStartStop/group/embed-certs/serial/Stop 11.98
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
317 TestStartStop/group/embed-certs/serial/SecondStart 266.8
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
321 TestStartStop/group/no-preload/serial/Pause 3.18
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.79
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.38
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.99
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 277.1
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/embed-certs/serial/Pause 3.07
334 TestStartStop/group/newest-cni/serial/FirstStart 34.42
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
337 TestStartStop/group/newest-cni/serial/Stop 1.25
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/newest-cni/serial/SecondStart 16.06
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.63
343 TestStartStop/group/newest-cni/serial/Pause 3.03
344 TestNetworkPlugins/group/auto/Start 80.37
345 TestNetworkPlugins/group/auto/KubeletFlags 0.29
346 TestNetworkPlugins/group/auto/NetCatPod 10.29
347 TestNetworkPlugins/group/auto/DNS 0.19
348 TestNetworkPlugins/group/auto/Localhost 0.17
349 TestNetworkPlugins/group/auto/HairPin 0.16
350 TestNetworkPlugins/group/kindnet/Start 79.69
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.45
355 TestNetworkPlugins/group/calico/Start 62.97
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/DNS 0.21
361 TestNetworkPlugins/group/kindnet/Localhost 0.15
362 TestNetworkPlugins/group/kindnet/HairPin 0.15
363 TestNetworkPlugins/group/calico/KubeletFlags 0.3
364 TestNetworkPlugins/group/calico/NetCatPod 11.27
365 TestNetworkPlugins/group/calico/DNS 0.27
366 TestNetworkPlugins/group/calico/Localhost 0.22
367 TestNetworkPlugins/group/calico/HairPin 0.21
368 TestNetworkPlugins/group/custom-flannel/Start 64.57
369 TestNetworkPlugins/group/enable-default-cni/Start 53.64
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
374 TestNetworkPlugins/group/custom-flannel/DNS 0.27
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
380 TestNetworkPlugins/group/flannel/Start 63.64
381 TestNetworkPlugins/group/bridge/Start 56.28
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
384 TestNetworkPlugins/group/bridge/NetCatPod 10.29
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
386 TestNetworkPlugins/group/flannel/NetCatPod 10.27
387 TestNetworkPlugins/group/bridge/DNS 0.17
388 TestNetworkPlugins/group/bridge/Localhost 0.15
389 TestNetworkPlugins/group/bridge/HairPin 0.17
390 TestNetworkPlugins/group/flannel/DNS 0.17
391 TestNetworkPlugins/group/flannel/Localhost 0.15
392 TestNetworkPlugins/group/flannel/HairPin 0.17
TestDownloadOnly/v1.20.0/json-events (10.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-469167 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-469167 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.666816822s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.67s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 18:52:05.600677  593105 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0920 18:52:05.600766  593105 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-469167
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-469167: exit status 85 (412.829851ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-469167 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |          |
	|         | -p download-only-469167        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:51:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:51:54.978284  593110 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:51:54.978425  593110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:51:54.978437  593110 out.go:358] Setting ErrFile to fd 2...
	I0920 18:51:54.978443  593110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:51:54.978680  593110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	W0920 18:51:54.978818  593110 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19679-586329/.minikube/config/config.json: open /home/jenkins/minikube-integration/19679-586329/.minikube/config/config.json: no such file or directory
	I0920 18:51:54.979264  593110 out.go:352] Setting JSON to true
	I0920 18:51:54.980141  593110 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9265,"bootTime":1726849050,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 18:51:54.980214  593110 start.go:139] virtualization:  
	I0920 18:51:54.983340  593110 out.go:97] [download-only-469167] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 18:51:54.983557  593110 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 18:51:54.983633  593110 notify.go:220] Checking for updates...
	I0920 18:51:54.985320  593110 out.go:169] MINIKUBE_LOCATION=19679
	I0920 18:51:54.987868  593110 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:51:54.990518  593110 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 18:51:54.992499  593110 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	I0920 18:51:54.994509  593110 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 18:51:54.998606  593110 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 18:51:54.998944  593110 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:51:55.042478  593110 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:51:55.042641  593110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:51:55.100244  593110 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 18:51:55.088696918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:51:55.100369  593110 docker.go:318] overlay module found
	I0920 18:51:55.102253  593110 out.go:97] Using the docker driver based on user configuration
	I0920 18:51:55.102283  593110 start.go:297] selected driver: docker
	I0920 18:51:55.102291  593110 start.go:901] validating driver "docker" against <nil>
	I0920 18:51:55.102437  593110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:51:55.158909  593110 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 18:51:55.148577728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:51:55.159218  593110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:51:55.159518  593110 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 18:51:55.159679  593110 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:51:55.161669  593110 out.go:169] Using Docker driver with root privileges
	I0920 18:51:55.163375  593110 cni.go:84] Creating CNI manager for ""
	I0920 18:51:55.163442  593110 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:51:55.163455  593110 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:51:55.163553  593110 start.go:340] cluster config:
	{Name:download-only-469167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-469167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:51:55.165427  593110 out.go:97] Starting "download-only-469167" primary control-plane node in "download-only-469167" cluster
	I0920 18:51:55.165461  593110 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:51:55.167107  593110 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:51:55.167140  593110 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:51:55.167321  593110 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:51:55.183953  593110 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:51:55.184145  593110 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:51:55.184279  593110 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:51:55.251106  593110 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0920 18:51:55.251133  593110 cache.go:56] Caching tarball of preloaded images
	I0920 18:51:55.251901  593110 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 18:51:55.254294  593110 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 18:51:55.254318  593110 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0920 18:51:55.347833  593110 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0920 18:51:59.605128  593110 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:52:03.913138  593110 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0920 18:52:03.913280  593110 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-469167 host does not exist
	  To start a cluster, run: "minikube start -p download-only-469167"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.41s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.36s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-469167
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-447269 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-447269 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.804378432s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.80s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 18:52:13.403196  593105 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 18:52:13.403237  593105 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-447269
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-447269: exit status 85 (66.639861ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-469167 | jenkins | v1.34.0 | 20 Sep 24 18:51 UTC |                     |
	|         | -p download-only-469167        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| delete  | -p download-only-469167        | download-only-469167 | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC | 20 Sep 24 18:52 UTC |
	| start   | -o=json --download-only        | download-only-447269 | jenkins | v1.34.0 | 20 Sep 24 18:52 UTC |                     |
	|         | -p download-only-447269        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 18:52:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 18:52:06.642881  593313 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:52:06.643093  593313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:06.643133  593313 out.go:358] Setting ErrFile to fd 2...
	I0920 18:52:06.643158  593313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:52:06.643441  593313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 18:52:06.643927  593313 out.go:352] Setting JSON to true
	I0920 18:52:06.644885  593313 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9277,"bootTime":1726849050,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 18:52:06.644990  593313 start.go:139] virtualization:  
	I0920 18:52:06.689415  593313 out.go:97] [download-only-447269] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 18:52:06.689620  593313 notify.go:220] Checking for updates...
	I0920 18:52:06.722686  593313 out.go:169] MINIKUBE_LOCATION=19679
	I0920 18:52:06.754497  593313 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:52:06.786375  593313 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 18:52:06.803352  593313 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	I0920 18:52:06.834529  593313 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 18:52:06.899145  593313 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 18:52:06.899443  593313 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:52:06.919789  593313 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 18:52:06.919897  593313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:52:06.974886  593313 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:52:06.964038034 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:52:06.975084  593313 docker.go:318] overlay module found
	I0920 18:52:06.996632  593313 out.go:97] Using the docker driver based on user configuration
	I0920 18:52:06.996670  593313 start.go:297] selected driver: docker
	I0920 18:52:06.996678  593313 start.go:901] validating driver "docker" against <nil>
	I0920 18:52:06.996809  593313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:52:07.047510  593313 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 18:52:07.037463157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 18:52:07.047740  593313 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 18:52:07.048021  593313 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 18:52:07.048188  593313 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 18:52:07.077921  593313 out.go:169] Using Docker driver with root privileges
	I0920 18:52:07.108854  593313 cni.go:84] Creating CNI manager for ""
	I0920 18:52:07.108930  593313 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0920 18:52:07.108940  593313 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 18:52:07.109037  593313 start.go:340] cluster config:
	{Name:download-only-447269 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-447269 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:52:07.121423  593313 out.go:97] Starting "download-only-447269" primary control-plane node in "download-only-447269" cluster
	I0920 18:52:07.121468  593313 cache.go:121] Beginning downloading kic base image for docker with crio
	I0920 18:52:07.123347  593313 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 18:52:07.123400  593313 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:07.123507  593313 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 18:52:07.139557  593313 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 18:52:07.139687  593313 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 18:52:07.139711  593313 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 18:52:07.139722  593313 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 18:52:07.139730  593313 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 18:52:07.165662  593313 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0920 18:52:07.165692  593313 cache.go:56] Caching tarball of preloaded images
	I0920 18:52:07.165889  593313 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 18:52:07.168341  593313 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 18:52:07.168381  593313 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0920 18:52:07.366838  593313 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19679-586329/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-447269 host does not exist
	  To start a cluster, run: "minikube start -p download-only-447269"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-447269
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I0920 18:52:14.672491  593105 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-083327 --alsologtostderr --binary-mirror http://127.0.0.1:44087 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-083327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-083327
--- PASS: TestBinaryMirror (0.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-060912
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-060912: exit status 85 (69.78581ms)

-- stdout --
	* Profile "addons-060912" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-060912"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-060912
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-060912: exit status 85 (76.035371ms)

-- stdout --
	* Profile "addons-060912" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-060912"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (196.66s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-060912 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-060912 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m16.659431448s)
--- PASS: TestAddons/Setup (196.66s)

TestAddons/serial/GCPAuth/Namespaces (0.24s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-060912 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-060912 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.24s)

TestAddons/parallel/InspektorGadget (11.08s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7nhmd" [9bf681a2-c8a8-4a5e-bde2-c1d00cdd03d4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.049062506s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-060912
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-060912: (6.02697573s)
--- PASS: TestAddons/parallel/InspektorGadget (11.08s)

TestAddons/parallel/CSI (61.78s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 19:03:53.597500  593105 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 19:03:53.605626  593105 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 19:03:53.605659  593105 kapi.go:107] duration metric: took 8.173155ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 8.183624ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-060912 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-060912 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ac2b9ddf-2163-41ed-8969-7cc188ce6da5] Pending
helpers_test.go:344: "task-pv-pod" [ac2b9ddf-2163-41ed-8969-7cc188ce6da5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ac2b9ddf-2163-41ed-8969-7cc188ce6da5] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00459578s
addons_test.go:528: (dbg) Run:  kubectl --context addons-060912 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-060912 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-060912 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-060912 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-060912 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-060912 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-060912 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1085fea1-7244-40bc-b2a4-30e772154b3a] Pending
helpers_test.go:344: "task-pv-pod-restore" [1085fea1-7244-40bc-b2a4-30e772154b3a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1085fea1-7244-40bc-b2a4-30e772154b3a] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004043769s
addons_test.go:570: (dbg) Run:  kubectl --context addons-060912 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-060912 delete pod task-pv-pod-restore: (1.178672788s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-060912 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-060912 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable csi-hostpath-driver --alsologtostderr -v=1
2024/09/20 19:04:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.154107971s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 addons disable volumesnapshots --alsologtostderr -v=1: (1.143262758s)
--- PASS: TestAddons/parallel/CSI (61.78s)

TestAddons/parallel/Headlamp (17.73s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-060912 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-jsqwn" [27ab8bc3-6f7c-4cd8-81f8-d3bc859ecbb9] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-jsqwn" [27ab8bc3-6f7c-4cd8-81f8-d3bc859ecbb9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-jsqwn" [27ab8bc3-6f7c-4cd8-81f8-d3bc859ecbb9] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003757595s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 addons disable headlamp --alsologtostderr -v=1: (5.758967769s)
--- PASS: TestAddons/parallel/Headlamp (17.73s)

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-77rvl" [2343a3a8-99c6-45b2-a2bb-055a5907fd0d] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004470382s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-060912
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/LocalPath (52.33s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-060912 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-060912 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060912 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d8be12d3-35e2-4e77-a82e-495f90d4f283] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d8be12d3-35e2-4e77-a82e-495f90d4f283] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d8be12d3-35e2-4e77-a82e-495f90d4f283] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003817253s
addons_test.go:938: (dbg) Run:  kubectl --context addons-060912 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 ssh "cat /opt/local-path-provisioner/pvc-c835ab22-abe1-4560-b972-0a4131361751_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-060912 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-060912 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.24404688s)
--- PASS: TestAddons/parallel/LocalPath (52.33s)

TestAddons/parallel/NvidiaDevicePlugin (6.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6c4pc" [70208489-2144-41c7-b72c-895d0344ccd9] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003213562s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-060912
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

TestAddons/parallel/Yakd (11.72s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-v89pf" [fcc1e7a8-689e-47a3-aad3-a99b19f4a34f] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003943863s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-060912 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-060912 addons disable yakd --alsologtostderr -v=1: (5.716102884s)
--- PASS: TestAddons/parallel/Yakd (11.72s)

TestAddons/StoppedEnableDisable (6.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-060912
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-060912: (5.953541956s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-060912
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-060912
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-060912
--- PASS: TestAddons/StoppedEnableDisable (6.22s)

TestCertOptions (38.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-350363 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-350363 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.441191463s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-350363 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-350363 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-350363 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-350363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-350363
E0920 19:54:30.745217  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-350363: (1.99657191s)
--- PASS: TestCertOptions (38.17s)

TestCertExpiration (241s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-884843 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-884843 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.368186035s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-884843 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-884843 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.97258802s)
helpers_test.go:175: Cleaning up "cert-expiration-884843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-884843
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-884843: (2.661109265s)
--- PASS: TestCertExpiration (241.00s)

TestForceSystemdFlag (37.12s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-529623 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0920 19:52:33.808369  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-529623 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.05521975s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-529623 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-529623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-529623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-529623: (2.661248847s)
--- PASS: TestForceSystemdFlag (37.12s)

TestForceSystemdEnv (42.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-595877 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-595877 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.849444335s)
helpers_test.go:175: Cleaning up "force-systemd-env-595877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-595877
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-595877: (2.553084003s)
--- PASS: TestForceSystemdEnv (42.40s)

TestErrorSpam/setup (31.4s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-133596 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-133596 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-133596 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-133596 --driver=docker  --container-runtime=crio: (31.394953545s)
--- PASS: TestErrorSpam/setup (31.40s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 status
--- PASS: TestErrorSpam/status (1.05s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 stop: (1.248196813s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-133596 --log_dir /tmp/nospam-133596 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19679-586329/.minikube/files/etc/test/nested/copy/593105/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.2s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-345223 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-345223 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.196801879s)
--- PASS: TestFunctional/serial/StartWithProxy (75.20s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.26s)

=== RUN   TestFunctional/serial/SoftStart
I0920 19:12:52.600106  593105 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-345223 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-345223 --alsologtostderr -v=8: (26.261744158s)
functional_test.go:663: soft start took 26.262272656s for "functional-345223" cluster.
I0920 19:13:18.862209  593105 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (26.26s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-345223 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 cache add registry.k8s.io/pause:3.1: (1.445063112s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 cache add registry.k8s.io/pause:3.3: (1.565163599s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 cache add registry.k8s.io/pause:latest: (1.190859789s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.20s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-345223 /tmp/TestFunctionalserialCacheCmdcacheadd_local1663494953/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cache add minikube-local-cache-test:functional-345223
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cache delete minikube-local-cache-test:functional-345223
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-345223
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.154163ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 cache reload: (1.005602611s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 kubectl -- --context functional-345223 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-345223 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (54.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-345223 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-345223 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.317994559s)
functional_test.go:761: restart took 54.318094964s for "functional-345223" cluster.
I0920 19:14:21.745438  593105 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (54.32s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-345223 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 logs: (1.67632551s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 logs --file /tmp/TestFunctionalserialLogsFileCmd944190938/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 logs --file /tmp/TestFunctionalserialLogsFileCmd944190938/001/logs.txt: (1.71963866s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.72s)

TestFunctional/serial/InvalidService (4.68s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-345223 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-345223
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-345223: exit status 115 (611.84195ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32582 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-345223 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.68s)

TestFunctional/parallel/ConfigCmd (0.55s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 config get cpus: exit status 14 (121.643852ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 config get cpus: exit status 14 (68.411281ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)

TestFunctional/parallel/DashboardCmd (10.33s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-345223 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-345223 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 621191: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.33s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-345223 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-345223 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (178.868798ms)

-- stdout --
	* [functional-345223] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 19:15:02.757176  620620 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:15:02.757390  620620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:15:02.757418  620620 out.go:358] Setting ErrFile to fd 2...
	I0920 19:15:02.757437  620620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:15:02.757718  620620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 19:15:02.758137  620620 out.go:352] Setting JSON to false
	I0920 19:15:02.759250  620620 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10653,"bootTime":1726849050,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 19:15:02.759387  620620 start.go:139] virtualization:  
	I0920 19:15:02.761663  620620 out.go:177] * [functional-345223] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:15:02.764060  620620 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:15:02.764131  620620 notify.go:220] Checking for updates...
	I0920 19:15:02.768362  620620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:15:02.770285  620620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 19:15:02.772470  620620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	I0920 19:15:02.774422  620620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:15:02.776438  620620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:15:02.778986  620620 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:15:02.779609  620620 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:15:02.813317  620620 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:15:02.813444  620620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:15:02.873449  620620 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:15:02.862616207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:15:02.873566  620620 docker.go:318] overlay module found
	I0920 19:15:02.876377  620620 out.go:177] * Using the docker driver based on existing profile
	I0920 19:15:02.878106  620620 start.go:297] selected driver: docker
	I0920 19:15:02.878127  620620 start.go:901] validating driver "docker" against &{Name:functional-345223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-345223 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:15:02.878254  620620 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:15:02.880505  620620 out.go:201] 
	W0920 19:15:02.881958  620620 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 19:15:02.883566  620620 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-345223 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
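The dry-run failure above is minikube's pre-flight memory validation. A minimal stand-in sketch of that gate — the `check_memory` helper, the hardcoded 1800MB floor, and the return code are taken from the log output, not from minikube's actual Go implementation:

```shell
#!/bin/sh
# Sketch of the pre-flight memory gate (hypothetical helper): requests below
# the usable minimum abort with RSRC_INSUFFICIENT_REQ_MEMORY before any node
# is created.
MIN_MB=1800   # usable minimum reported in the log above

check_memory() {
    req_mb=$1
    if [ "$req_mb" -lt "$MIN_MB" ]; then
        echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mb}MiB is less than the usable minimum of ${MIN_MB}MB"
        return 23   # the dry-run above exits with status 23
    fi
    echo "memory request ${req_mb}MiB accepted"
}

check_memory 250 || echo "start aborted"   # --memory 250MB takes the failure path
check_memory 4000                          # the profile's configured 4000MB passes
```

The same gate fires in the InternationalLanguage test below, with the message localized to French.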

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-345223 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-345223 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.068568ms)

-- stdout --
	* [functional-345223] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 19:15:02.557308  620575 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:15:02.557517  620575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:15:02.557544  620575 out.go:358] Setting ErrFile to fd 2...
	I0920 19:15:02.557565  620575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:15:02.557947  620575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 19:15:02.558377  620575 out.go:352] Setting JSON to false
	I0920 19:15:02.559438  620575 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10653,"bootTime":1726849050,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 19:15:02.559540  620575 start.go:139] virtualization:  
	I0920 19:15:02.567999  620575 out.go:177] * [functional-345223] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 19:15:02.575552  620575 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:15:02.575625  620575 notify.go:220] Checking for updates...
	I0920 19:15:02.577483  620575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:15:02.579380  620575 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 19:15:02.581733  620575 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	I0920 19:15:02.583649  620575 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:15:02.585554  620575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:15:02.587995  620575 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:15:02.588605  620575 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:15:02.614459  620575 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:15:02.614572  620575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:15:02.693782  620575 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:15:02.683872052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:15:02.693891  620575 docker.go:318] overlay module found
	I0920 19:15:02.695958  620575 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 19:15:02.697659  620575 start.go:297] selected driver: docker
	I0920 19:15:02.697674  620575 start.go:901] validating driver "docker" against &{Name:functional-345223 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-345223 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:15:02.697791  620575 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:15:02.700001  620575 out.go:201] 
	W0920 19:15:02.701729  620575 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 19:15:02.703764  620575 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/ServiceCmdConnect (10.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-345223 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-345223 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-88xrd" [8ba346f8-c079-4f27-9b36-eecc8d2ef953] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-88xrd" [8ba346f8-c079-4f27-9b36-eecc8d2ef953] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003943491s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32612
functional_test.go:1675: http://192.168.49.2:32612: success! body:

Hostname: hello-node-connect-65d86f57f4-88xrd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32612
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.69s)
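The endpoint found at functional_test.go:1655 is simply the node address joined with the allocated NodePort. A sketch of that derivation, with both values hardcoded from this run rather than re-queried from the cluster:

```shell
#!/bin/sh
# `minikube service <name> --url` resolves to http://<node-ip>:<nodePort>.
NODE_IP=192.168.49.2    # docker-driver node address from the log
NODE_PORT=32612         # NodePort allocated by `kubectl expose --type=NodePort`
echo "http://${NODE_IP}:${NODE_PORT}"
```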

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (24.71s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [60625dae-fc09-47e1-b10c-91a6510b12b2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004356011s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-345223 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-345223 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-345223 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-345223 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2fdd0c14-8b6d-4206-9368-a17aed746801] Pending
helpers_test.go:344: "sp-pod" [2fdd0c14-8b6d-4206-9368-a17aed746801] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2fdd0c14-8b6d-4206-9368-a17aed746801] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003926617s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-345223 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-345223 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-345223 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [960ee94f-7308-477c-8d14-d0b4ef5fb917] Pending
helpers_test.go:344: "sp-pod" [960ee94f-7308-477c-8d14-d0b4ef5fb917] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00437377s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-345223 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.71s)
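The contents of testdata/storage-provisioner/pvc.yaml are not reproduced in this log; a minimal hypothetical equivalent, using the claim name `myclaim` that the test queries (the size and access mode here are illustrative guesses):

```shell
#!/bin/sh
# Write a hypothetical claim manifest; the default storage class is assumed,
# matching the `kubectl get storageclass` check the test performs first.
cat > /tmp/pvc-sketch.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF
echo "apply with: kubectl --context functional-345223 apply -f /tmp/pvc-sketch.yaml"
```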

TestFunctional/parallel/SSHCmd (0.8s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.80s)

TestFunctional/parallel/CpCmd (2.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh -n functional-345223 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cp functional-345223:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2580582767/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh -n functional-345223 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh -n functional-345223 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/593105/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo cat /etc/test/nested/copy/593105/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/593105.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo cat /etc/ssl/certs/593105.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/593105.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo cat /usr/share/ca-certificates/593105.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5931052.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo cat /etc/ssl/certs/5931052.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5931052.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo cat /usr/share/ca-certificates/5931052.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)
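CertSync checks both the `.pem` paths and names like /etc/ssl/certs/51391683.0 because OpenSSL looks trust anchors up by subject-name hash, so the synced cert must also exist under `<subject_hash>.0`. A sketch of how such a hashed name is derived (requires the `openssl` CLI; the throwaway self-signed cert here is illustrative, so its hash will differ from the one in the log):

```shell
#!/bin/sh
# Generate a throwaway self-signed cert and compute the OpenSSL subject hash
# that would name its /etc/ssl/certs/<hash>.0 entry.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example-test-cert" \
    -keyout "$tmp/test.key" -out "$tmp/test.pem" 2>/dev/null
hash=$(openssl x509 -noout -subject_hash -in "$tmp/test.pem")
echo "would be installed as /etc/ssl/certs/${hash}.0"
```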

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-345223 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 ssh "sudo systemctl is-active docker": exit status 1 (444.244441ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 ssh "sudo systemctl is-active containerd": exit status 1 (379.11098ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)
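The non-zero exits above are expected: `systemctl is-active` prints the unit state and encodes it in the exit code (0 only when active; inactive units conventionally return 3, which ssh then propagates as "Process exited with status 3"). A stand-in sketch of that contract, without systemd:

```shell
#!/bin/sh
# Mimic `systemctl is-active <unit>`: print the state, succeed only if active.
is_active() {
    state=$1
    echo "$state"
    if [ "$state" = "active" ]; then
        return 0
    else
        return 3   # systemd's conventional code for an inactive unit
    fi
}

is_active inactive || echo "runtime disabled (exit $?)"
```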

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-345223 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-345223 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-345223 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-345223 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 618392: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-345223 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-345223 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e68fe247-d9b1-4715-ac73-866839512374] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e68fe247-d9b1-4715-ac73-866839512374] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003778777s
I0920 19:14:40.419863  593105 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.37s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-345223 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
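The jsonpath query above reads the LoadBalancer status that `minikube tunnel` populates on nginx-svc. A stand-in extraction over a hand-written JSON snippet (the real test goes through kubectl and the live service object; the IP is the one this run reported):

```shell
#!/bin/sh
# Stand-in for:
#   kubectl get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
# Once the tunnel is running, the ingress list carries a routable IP.
status='{"status":{"loadBalancer":{"ingress":[{"ip":"10.105.80.157"}]}}}'
ip=$(printf '%s' "$status" | grep -o '"ip":"[^"]*"' | head -n1 | cut -d'"' -f4)
echo "$ip"
```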

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.80.157 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-345223 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-345223 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-345223 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-h7zmd" [3ab4f2bc-8fa2-4ec1-8bf6-fcf982df4c4b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-h7zmd" [3ab4f2bc-8fa2-4ec1-8bf6-fcf982df4c4b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003206642s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "347.916196ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "52.595786ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "344.350222ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "53.265609ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (8.23s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdany-port3004114983/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726859696672576740" to /tmp/TestFunctionalparallelMountCmdany-port3004114983/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726859696672576740" to /tmp/TestFunctionalparallelMountCmdany-port3004114983/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726859696672576740" to /tmp/TestFunctionalparallelMountCmdany-port3004114983/001/test-1726859696672576740
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.110176ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 19:14:56.973936  593105 retry.go:31] will retry after 589.536932ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 19:14 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 19:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 19:14 test-1726859696672576740
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh cat /mount-9p/test-1726859696672576740
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-345223 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d4ab2b72-f403-4398-93e1-6d09e4434e05] Pending
helpers_test.go:344: "busybox-mount" [d4ab2b72-f403-4398-93e1-6d09e4434e05] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d4ab2b72-f403-4398-93e1-6d09e4434e05] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d4ab2b72-f403-4398-93e1-6d09e4434e05] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004226172s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-345223 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdany-port3004114983/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.23s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 service list -o json
functional_test.go:1494: Took "517.619901ms" to run "out/minikube-linux-arm64 -p functional-345223 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32285
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32285
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.28s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdspecific-port1141537032/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (608.476954ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 19:15:05.509100  593105 retry.go:31] will retry after 460.967448ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdspecific-port1141537032/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 ssh "sudo umount -f /mount-9p": exit status 1 (323.932324ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-345223 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdspecific-port1141537032/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.28s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1550638228/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1550638228/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1550638228/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T" /mount1: (1.260290964s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-345223 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1550638228/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1550638228/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-345223 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1550638228/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.17s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.25s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 version -o=json --components: (1.250868794s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-345223 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-345223
localhost/kicbase/echo-server:functional-345223
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-345223 image ls --format short --alsologtostderr:
I0920 19:15:19.508248  623685 out.go:345] Setting OutFile to fd 1 ...
I0920 19:15:19.508482  623685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:19.508505  623685 out.go:358] Setting ErrFile to fd 2...
I0920 19:15:19.508525  623685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:19.508784  623685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
I0920 19:15:19.509498  623685 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:19.509665  623685 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:19.510173  623685 cli_runner.go:164] Run: docker container inspect functional-345223 --format={{.State.Status}}
I0920 19:15:19.529871  623685 ssh_runner.go:195] Run: systemctl --version
I0920 19:15:19.529922  623685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345223
I0920 19:15:19.560897  623685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/functional-345223/id_rsa Username:docker}
I0920 19:15:19.664493  623685 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-345223 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/kicbase/echo-server           | functional-345223  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| docker.io/library/nginx                 | latest             | 195245f0c7927 | 197MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/minikube-local-cache-test     | functional-345223  | 63c85730eacad | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-345223 image ls --format table --alsologtostderr:
I0920 19:15:20.189546  623853 out.go:345] Setting OutFile to fd 1 ...
I0920 19:15:20.189739  623853 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:20.189766  623853 out.go:358] Setting ErrFile to fd 2...
I0920 19:15:20.189795  623853 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:20.190181  623853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
I0920 19:15:20.191273  623853 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:20.199790  623853 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:20.200471  623853 cli_runner.go:164] Run: docker container inspect functional-345223 --format={{.State.Status}}
I0920 19:15:20.221611  623853 ssh_runner.go:195] Run: systemctl --version
I0920 19:15:20.221665  623853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345223
I0920 19:15:20.239939  623853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/functional-345223/id_rsa Username:docker}
I0920 19:15:20.341084  623853 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-345223 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["
gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-345223"],"size":"4788229"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"
id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"63c85730eacad5c19654e31ca1b9a1dbdca923de81b8653d9009483f712d49cd","repoDigests":["localhost/minikube-local-cache-test@sha256:3c40488c9caf576e8d2586d3eb74c6f27b14f56e22e7562af3357c853b21faff"],"repoTags":["localhost/minikube-local-cache-test:functional-345223"],"size":"3330"},{"id"
:"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425
b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172029"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/
kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"5
28622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-345223 image ls --format json --alsologtostderr:
I0920 19:15:19.873738  623765 out.go:345] Setting OutFile to fd 1 ...
I0920 19:15:19.873930  623765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:19.873936  623765 out.go:358] Setting ErrFile to fd 2...
I0920 19:15:19.873942  623765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:19.874259  623765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
I0920 19:15:19.876397  623765 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:19.876558  623765 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:19.877181  623765 cli_runner.go:164] Run: docker container inspect functional-345223 --format={{.State.Status}}
I0920 19:15:19.901334  623765 ssh_runner.go:195] Run: systemctl --version
I0920 19:15:19.901412  623765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345223
I0920 19:15:19.933755  623765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/functional-345223/id_rsa Username:docker}
I0920 19:15:20.033829  623765 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-345223 image ls --format yaml --alsologtostderr:
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171
repoTags:
- docker.io/library/nginx:latest
size: "197172029"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-345223
size: "4788229"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 63c85730eacad5c19654e31ca1b9a1dbdca923de81b8653d9009483f712d49cd
repoDigests:
- localhost/minikube-local-cache-test@sha256:3c40488c9caf576e8d2586d3eb74c6f27b14f56e22e7562af3357c853b21faff
repoTags:
- localhost/minikube-local-cache-test:functional-345223
size: "3330"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-345223 image ls --format yaml --alsologtostderr:
I0920 19:15:19.566621  623695 out.go:345] Setting OutFile to fd 1 ...
I0920 19:15:19.566797  623695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:19.566804  623695 out.go:358] Setting ErrFile to fd 2...
I0920 19:15:19.566809  623695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:19.567057  623695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
I0920 19:15:19.567707  623695 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:19.567815  623695 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:19.568335  623695 cli_runner.go:164] Run: docker container inspect functional-345223 --format={{.State.Status}}
I0920 19:15:19.602281  623695 ssh_runner.go:195] Run: systemctl --version
I0920 19:15:19.602376  623695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345223
I0920 19:15:19.622751  623695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/functional-345223/id_rsa Username:docker}
I0920 19:15:19.723760  623695 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-345223 ssh pgrep buildkitd: exit status 1 (326.913508ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image build -t localhost/my-image:functional-345223 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 image build -t localhost/my-image:functional-345223 testdata/build --alsologtostderr: (3.018782166s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-345223 image build -t localhost/my-image:functional-345223 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9d35fc0080f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-345223
--> f300e4b889b
Successfully tagged localhost/my-image:functional-345223
f300e4b889b41e73d518aecce36cb1de28ec1c18457be56fd749df7295bf48d1
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-345223 image build -t localhost/my-image:functional-345223 testdata/build --alsologtostderr:
I0920 19:15:20.111172  623841 out.go:345] Setting OutFile to fd 1 ...
I0920 19:15:20.111904  623841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:20.111945  623841 out.go:358] Setting ErrFile to fd 2...
I0920 19:15:20.111969  623841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:15:20.112308  623841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
I0920 19:15:20.113189  623841 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:20.113996  623841 config.go:182] Loaded profile config "functional-345223": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 19:15:20.114611  623841 cli_runner.go:164] Run: docker container inspect functional-345223 --format={{.State.Status}}
I0920 19:15:20.141205  623841 ssh_runner.go:195] Run: systemctl --version
I0920 19:15:20.141255  623841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-345223
I0920 19:15:20.178787  623841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/functional-345223/id_rsa Username:docker}
I0920 19:15:20.279706  623841 build_images.go:161] Building image from path: /tmp/build.1384469521.tar
I0920 19:15:20.279790  623841 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 19:15:20.289720  623841 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1384469521.tar
I0920 19:15:20.293969  623841 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1384469521.tar: stat -c "%s %y" /var/lib/minikube/build/build.1384469521.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1384469521.tar': No such file or directory
I0920 19:15:20.293999  623841 ssh_runner.go:362] scp /tmp/build.1384469521.tar --> /var/lib/minikube/build/build.1384469521.tar (3072 bytes)
I0920 19:15:20.320852  623841 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1384469521
I0920 19:15:20.330277  623841 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1384469521 -xf /var/lib/minikube/build/build.1384469521.tar
I0920 19:15:20.340743  623841 crio.go:315] Building image: /var/lib/minikube/build/build.1384469521
I0920 19:15:20.340815  623841 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-345223 /var/lib/minikube/build/build.1384469521 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0920 19:15:23.031400  623841 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-345223 /var/lib/minikube/build/build.1384469521 --cgroup-manager=cgroupfs: (2.690561668s)
I0920 19:15:23.031476  623841 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1384469521
I0920 19:15:23.040459  623841 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1384469521.tar
I0920 19:15:23.049917  623841 build_images.go:217] Built localhost/my-image:functional-345223 from /tmp/build.1384469521.tar
I0920 19:15:23.049950  623841 build_images.go:133] succeeded building to: functional-345223
I0920 19:15:23.049956  623841 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.58s)

TestFunctional/parallel/ImageCommands/Setup (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-345223
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.85s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image load --daemon kicbase/echo-server:functional-345223 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-345223 image load --daemon kicbase/echo-server:functional-345223 --alsologtostderr: (1.778261938s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls
2024/09/20 19:15:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image load --daemon kicbase/echo-server:functional-345223 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-345223
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image load --daemon kicbase/echo-server:functional-345223 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image save kicbase/echo-server:functional-345223 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image rm kicbase/echo-server:functional-345223 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-345223
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-345223 image save --daemon kicbase/echo-server:functional-345223 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-345223
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.66s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-345223
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-345223
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-345223
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (171.66s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-081084 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 19:15:32.536840  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:32.543142  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:32.554551  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:32.575896  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:32.617241  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:32.698664  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:32.859913  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:33.181605  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:33.823789  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:35.105638  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:37.667181  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:42.789288  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:15:53.030633  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:13.511938  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:16:54.474129  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:18:16.396321  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-081084 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m50.799248371s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (171.66s)

TestMultiControlPlane/serial/DeployApp (10.5s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-081084 -- rollout status deployment/busybox: (7.523637662s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-8bwjs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-gfwrn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-rqjh8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-8bwjs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-gfwrn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-rqjh8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-8bwjs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-gfwrn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-rqjh8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.50s)

TestMultiControlPlane/serial/PingHostFromPods (1.56s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-8bwjs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-8bwjs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-gfwrn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-gfwrn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-rqjh8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-081084 -- exec busybox-7dff88458-rqjh8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)
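The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from busybox's nslookup output by taking line 5 and its third space-separated field. A minimal sketch of just that parsing step, run against a hypothetical canned transcript (the real test executes the lookup inside each busybox pod):

```shell
# Hypothetical busybox nslookup transcript; the exact format varies by busybox version.
nslookup_output="Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal"

# Line 5 holds the answer record; field 3 (single-space delimited) is the IP.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The hard-coded `NR==5` is why this kind of test is sensitive to nslookup output format changes: any extra banner line shifts the answer record off line 5.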

TestMultiControlPlane/serial/AddWorkerNode (62.93s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-081084 -v=7 --alsologtostderr
E0920 19:19:30.741468  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:30.747775  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:30.759197  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:30.780642  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:30.822039  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:30.903466  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:31.065358  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:19:31.386756  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-081084 -v=7 --alsologtostderr: (1m1.953289924s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr
E0920 19:19:32.028254  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.93s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-081084 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0920 19:19:33.310428  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.063309922s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

TestMultiControlPlane/serial/CopyFile (19.36s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp testdata/cp-test.txt ha-081084:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3049606023/001/cp-test_ha-081084.txt
E0920 19:19:35.872205  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084:/home/docker/cp-test.txt ha-081084-m02:/home/docker/cp-test_ha-081084_ha-081084-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m02 "sudo cat /home/docker/cp-test_ha-081084_ha-081084-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084:/home/docker/cp-test.txt ha-081084-m03:/home/docker/cp-test_ha-081084_ha-081084-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m03 "sudo cat /home/docker/cp-test_ha-081084_ha-081084-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084:/home/docker/cp-test.txt ha-081084-m04:/home/docker/cp-test_ha-081084_ha-081084-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m04 "sudo cat /home/docker/cp-test_ha-081084_ha-081084-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp testdata/cp-test.txt ha-081084-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3049606023/001/cp-test_ha-081084-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m02:/home/docker/cp-test.txt ha-081084:/home/docker/cp-test_ha-081084-m02_ha-081084.txt
E0920 19:19:40.993696  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084 "sudo cat /home/docker/cp-test_ha-081084-m02_ha-081084.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m02:/home/docker/cp-test.txt ha-081084-m03:/home/docker/cp-test_ha-081084-m02_ha-081084-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m03 "sudo cat /home/docker/cp-test_ha-081084-m02_ha-081084-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m02:/home/docker/cp-test.txt ha-081084-m04:/home/docker/cp-test_ha-081084-m02_ha-081084-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m04 "sudo cat /home/docker/cp-test_ha-081084-m02_ha-081084-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp testdata/cp-test.txt ha-081084-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3049606023/001/cp-test_ha-081084-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m03:/home/docker/cp-test.txt ha-081084:/home/docker/cp-test_ha-081084-m03_ha-081084.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084 "sudo cat /home/docker/cp-test_ha-081084-m03_ha-081084.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m03:/home/docker/cp-test.txt ha-081084-m02:/home/docker/cp-test_ha-081084-m03_ha-081084-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m02 "sudo cat /home/docker/cp-test_ha-081084-m03_ha-081084-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m03:/home/docker/cp-test.txt ha-081084-m04:/home/docker/cp-test_ha-081084-m03_ha-081084-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m04 "sudo cat /home/docker/cp-test_ha-081084-m03_ha-081084-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp testdata/cp-test.txt ha-081084-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3049606023/001/cp-test_ha-081084-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m04:/home/docker/cp-test.txt ha-081084:/home/docker/cp-test_ha-081084-m04_ha-081084.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084 "sudo cat /home/docker/cp-test_ha-081084-m04_ha-081084.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m04:/home/docker/cp-test.txt ha-081084-m02:/home/docker/cp-test_ha-081084-m04_ha-081084-m02.txt
E0920 19:19:51.235789  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m02 "sudo cat /home/docker/cp-test_ha-081084-m04_ha-081084-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 cp ha-081084-m04:/home/docker/cp-test.txt ha-081084-m03:/home/docker/cp-test_ha-081084-m04_ha-081084-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 ssh -n ha-081084-m03 "sudo cat /home/docker/cp-test_ha-081084-m04_ha-081084-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.36s)
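The CopyFile test above walks a full copy matrix: every node's `/home/docker/cp-test.txt` is copied to every other node and verified with `ssh -n ... sudo cat`. A sketch that regenerates the same command matrix for the four-node cluster in this run (commands are only printed here, not executed; node names are taken from the log):

```shell
# Node list from this run's ha-081084 cluster.
nodes="ha-081084 ha-081084-m02 ha-081084-m03 ha-081084-m04"

# Emit one `minikube cp` per ordered (src, dst) pair, matching the test's naming scheme.
cmds=0
for src in $nodes; do
  for dst in $nodes; do
    [ "$src" = "$dst" ] && continue
    echo "minikube -p ha-081084 cp ${src}:/home/docker/cp-test.txt ${dst}:/home/docker/cp-test_${src}_${dst}.txt"
    cmds=$((cmds + 1))
  done
done
echo "$cmds pairwise copies"
```

With 4 nodes this yields 12 node-to-node copies, which is why the test takes ~19s even though each copy is small: most of the time is SSH round-trips.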

TestMultiControlPlane/serial/StopSecondaryNode (12.78s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-081084 node stop m02 -v=7 --alsologtostderr: (12.016291745s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr: exit status 7 (766.231911ms)
-- stdout --
	ha-081084
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-081084-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-081084-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-081084-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0920 19:20:05.348402  639618 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:20:05.348605  639618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:20:05.348613  639618 out.go:358] Setting ErrFile to fd 2...
	I0920 19:20:05.348619  639618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:20:05.349000  639618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 19:20:05.349216  639618 out.go:352] Setting JSON to false
	I0920 19:20:05.349252  639618 mustload.go:65] Loading cluster: ha-081084
	I0920 19:20:05.349379  639618 notify.go:220] Checking for updates...
	I0920 19:20:05.349690  639618 config.go:182] Loaded profile config "ha-081084": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:20:05.349704  639618 status.go:174] checking status of ha-081084 ...
	I0920 19:20:05.350289  639618 cli_runner.go:164] Run: docker container inspect ha-081084 --format={{.State.Status}}
	I0920 19:20:05.378759  639618 status.go:364] ha-081084 host status = "Running" (err=<nil>)
	I0920 19:20:05.378794  639618 host.go:66] Checking if "ha-081084" exists ...
	I0920 19:20:05.379168  639618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-081084
	I0920 19:20:05.405889  639618 host.go:66] Checking if "ha-081084" exists ...
	I0920 19:20:05.406199  639618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:20:05.406249  639618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-081084
	I0920 19:20:05.439782  639618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/ha-081084/id_rsa Username:docker}
	I0920 19:20:05.544444  639618 ssh_runner.go:195] Run: systemctl --version
	I0920 19:20:05.548908  639618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:20:05.560612  639618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:20:05.619652  639618 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 19:20:05.608561519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:20:05.620367  639618 kubeconfig.go:125] found "ha-081084" server: "https://192.168.49.254:8443"
	I0920 19:20:05.620407  639618 api_server.go:166] Checking apiserver status ...
	I0920 19:20:05.620454  639618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:20:05.632252  639618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1380/cgroup
	I0920 19:20:05.642158  639618 api_server.go:182] apiserver freezer: "7:freezer:/docker/fe04e5cafbc53bf7858dd4ef3ced7c9a955a0544085ce63a3adfa44521bcc4da/crio/crio-5c2eddb25238b04cdabaf41d5462f75fed7785c8aa0e4f4233e6626e9226243b"
	I0920 19:20:05.642230  639618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fe04e5cafbc53bf7858dd4ef3ced7c9a955a0544085ce63a3adfa44521bcc4da/crio/crio-5c2eddb25238b04cdabaf41d5462f75fed7785c8aa0e4f4233e6626e9226243b/freezer.state
	I0920 19:20:05.651472  639618 api_server.go:204] freezer state: "THAWED"
	I0920 19:20:05.651502  639618 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 19:20:05.660914  639618 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 19:20:05.660949  639618 status.go:456] ha-081084 apiserver status = Running (err=<nil>)
	I0920 19:20:05.660966  639618 status.go:176] ha-081084 status: &{Name:ha-081084 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:20:05.660999  639618 status.go:174] checking status of ha-081084-m02 ...
	I0920 19:20:05.661325  639618 cli_runner.go:164] Run: docker container inspect ha-081084-m02 --format={{.State.Status}}
	I0920 19:20:05.677888  639618 status.go:364] ha-081084-m02 host status = "Stopped" (err=<nil>)
	I0920 19:20:05.677911  639618 status.go:377] host is not running, skipping remaining checks
	I0920 19:20:05.677919  639618 status.go:176] ha-081084-m02 status: &{Name:ha-081084-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:20:05.677945  639618 status.go:174] checking status of ha-081084-m03 ...
	I0920 19:20:05.678268  639618 cli_runner.go:164] Run: docker container inspect ha-081084-m03 --format={{.State.Status}}
	I0920 19:20:05.694174  639618 status.go:364] ha-081084-m03 host status = "Running" (err=<nil>)
	I0920 19:20:05.694200  639618 host.go:66] Checking if "ha-081084-m03" exists ...
	I0920 19:20:05.694581  639618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-081084-m03
	I0920 19:20:05.711555  639618 host.go:66] Checking if "ha-081084-m03" exists ...
	I0920 19:20:05.711882  639618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:20:05.711928  639618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-081084-m03
	I0920 19:20:05.730198  639618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/ha-081084-m03/id_rsa Username:docker}
	I0920 19:20:05.829028  639618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:20:05.841540  639618 kubeconfig.go:125] found "ha-081084" server: "https://192.168.49.254:8443"
	I0920 19:20:05.841579  639618 api_server.go:166] Checking apiserver status ...
	I0920 19:20:05.841654  639618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:20:05.852432  639618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1305/cgroup
	I0920 19:20:05.863332  639618 api_server.go:182] apiserver freezer: "7:freezer:/docker/895391b72b7bca5cbdd783b4bb9c1ea5a5f78b95193f0d95a8fdc05f42ccd9f4/crio/crio-7e710ab60b829547a01ef8ef88ed1ad04a7dbcbd3cfcf32bd8da845b8010df0d"
	I0920 19:20:05.863418  639618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/895391b72b7bca5cbdd783b4bb9c1ea5a5f78b95193f0d95a8fdc05f42ccd9f4/crio/crio-7e710ab60b829547a01ef8ef88ed1ad04a7dbcbd3cfcf32bd8da845b8010df0d/freezer.state
	I0920 19:20:05.873861  639618 api_server.go:204] freezer state: "THAWED"
	I0920 19:20:05.873895  639618 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 19:20:05.887336  639618 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 19:20:05.887369  639618 status.go:456] ha-081084-m03 apiserver status = Running (err=<nil>)
	I0920 19:20:05.887380  639618 status.go:176] ha-081084-m03 status: &{Name:ha-081084-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:20:05.887419  639618 status.go:174] checking status of ha-081084-m04 ...
	I0920 19:20:05.887747  639618 cli_runner.go:164] Run: docker container inspect ha-081084-m04 --format={{.State.Status}}
	I0920 19:20:05.907442  639618 status.go:364] ha-081084-m04 host status = "Running" (err=<nil>)
	I0920 19:20:05.907467  639618 host.go:66] Checking if "ha-081084-m04" exists ...
	I0920 19:20:05.907768  639618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-081084-m04
	I0920 19:20:05.926817  639618 host.go:66] Checking if "ha-081084-m04" exists ...
	I0920 19:20:05.927333  639618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:20:05.927403  639618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-081084-m04
	I0920 19:20:05.944481  639618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/ha-081084-m04/id_rsa Username:docker}
	I0920 19:20:06.044552  639618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:20:06.060398  639618 status.go:176] ha-081084-m04 status: &{Name:ha-081084-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.78s)
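The status checks in the stderr above locate the apiserver process with `pgrep`, resolve its freezer cgroup from `/proc/<pid>/cgroup`, and read `freezer.state` before probing `/healthz`. A minimal sketch of the state check against a stand-in file (the real path under `/sys/fs/cgroup/freezer/...` is container-specific, as in the log):

```shell
# Stand-in for the container's freezer.state file; "THAWED" means not frozen.
state_file=$(mktemp)
echo "THAWED" > "$state_file"

# Only a THAWED apiserver is worth probing at /healthz; FROZEN would mean paused.
state=$(cat "$state_file")
if [ "$state" = "THAWED" ]; then
  echo "apiserver freezer: $state"
fi
rm -f "$state_file"
```

This is why the status output distinguishes a "Stopped" host (container not running at all, as for m02) from an unhealthy apiserver on a running host.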

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

TestMultiControlPlane/serial/RestartSecondaryNode (21.95s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 node start m02 -v=7 --alsologtostderr
E0920 19:20:11.719340  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-081084 node start m02 -v=7 --alsologtostderr: (20.313249331s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr: (1.516421649s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.404860438s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (271.2s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-081084 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-081084 -v=7 --alsologtostderr
E0920 19:20:32.536885  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:20:52.681549  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:21:00.237831  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-081084 -v=7 --alsologtostderr: (37.495869759s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-081084 --wait=true -v=7 --alsologtostderr
E0920 19:22:14.603271  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:24:30.741010  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:24:58.444706  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-081084 --wait=true -v=7 --alsologtostderr: (3m53.528371451s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-081084
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (271.20s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.53s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-081084 node delete m03 -v=7 --alsologtostderr: (11.527685127s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.53s)
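The readiness check above pipes a go-template through `kubectl get nodes -o go-template`. As a minimal sketch, the same template can be evaluated in-process with Go's `text/template` against a mock node list (the `nodeReadiness` helper and the mock data are illustrative, not part of the test suite; the input shape mirrors `kubectl get nodes -o json`):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// nodeReadiness renders the same go-template the test passes to kubectl,
// against an already-decoded node list ({"items": [...]}).
func nodeReadiness(list map[string]interface{}) (string, error) {
	const tpl = `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	t, err := template.New("ready").Parse(tpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, list); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Two mock nodes: one Ready, one reporting Ready=False.
	list := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{"type": "Ready", "status": "True"},
				}}},
			map[string]interface{}{"status": map[string]interface{}{
				"conditions": []interface{}{
					map[string]interface{}{"type": "MemoryPressure", "status": "False"},
					map[string]interface{}{"type": "Ready", "status": "False"},
				}}},
		},
	}
	out, err := nodeReadiness(list)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", out) // " True\n False\n"
}
```

The template emits one ` <status>` line per node, filtered to the `Ready` condition, which is why the test only needs a string match on `True` rather than full JSON parsing.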

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

TestMultiControlPlane/serial/StopCluster (35.71s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 stop -v=7 --alsologtostderr
E0920 19:25:32.537278  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-081084 stop -v=7 --alsologtostderr: (35.601413792s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr: exit status 7 (112.917672ms)

-- stdout --
	ha-081084
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-081084-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-081084-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 19:25:50.356350  654354 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:25:50.356704  654354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:50.356740  654354 out.go:358] Setting ErrFile to fd 2...
	I0920 19:25:50.356761  654354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:25:50.357021  654354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 19:25:50.357254  654354 out.go:352] Setting JSON to false
	I0920 19:25:50.357336  654354 mustload.go:65] Loading cluster: ha-081084
	I0920 19:25:50.357404  654354 notify.go:220] Checking for updates...
	I0920 19:25:50.357842  654354 config.go:182] Loaded profile config "ha-081084": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:25:50.357893  654354 status.go:174] checking status of ha-081084 ...
	I0920 19:25:50.358483  654354 cli_runner.go:164] Run: docker container inspect ha-081084 --format={{.State.Status}}
	I0920 19:25:50.378152  654354 status.go:364] ha-081084 host status = "Stopped" (err=<nil>)
	I0920 19:25:50.378173  654354 status.go:377] host is not running, skipping remaining checks
	I0920 19:25:50.378180  654354 status.go:176] ha-081084 status: &{Name:ha-081084 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:25:50.378221  654354 status.go:174] checking status of ha-081084-m02 ...
	I0920 19:25:50.378581  654354 cli_runner.go:164] Run: docker container inspect ha-081084-m02 --format={{.State.Status}}
	I0920 19:25:50.404658  654354 status.go:364] ha-081084-m02 host status = "Stopped" (err=<nil>)
	I0920 19:25:50.404683  654354 status.go:377] host is not running, skipping remaining checks
	I0920 19:25:50.404692  654354 status.go:176] ha-081084-m02 status: &{Name:ha-081084-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:25:50.404732  654354 status.go:174] checking status of ha-081084-m04 ...
	I0920 19:25:50.405045  654354 cli_runner.go:164] Run: docker container inspect ha-081084-m04 --format={{.State.Status}}
	I0920 19:25:50.421233  654354 status.go:364] ha-081084-m04 host status = "Stopped" (err=<nil>)
	I0920 19:25:50.421258  654354 status.go:377] host is not running, skipping remaining checks
	I0920 19:25:50.421265  654354 status.go:176] ha-081084-m04 status: &{Name:ha-081084-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.71s)

TestMultiControlPlane/serial/RestartCluster (124.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-081084 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-081084 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.89623845s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (124.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (69.77s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-081084 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-081084 --control-plane -v=7 --alsologtostderr: (1m8.713186799s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-081084 status -v=7 --alsologtostderr: (1.054090504s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.77s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.01534776s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

TestJSONOutput/start/Command (52.29s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-634495 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0920 19:29:30.741578  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-634495 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (52.28660005s)
--- PASS: TestJSONOutput/start/Command (52.29s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-634495 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-634495 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-634495 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-634495 --output=json --user=testUser: (5.885448859s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-465466 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-465466 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.721468ms)

-- stdout --
	{"specversion":"1.0","id":"bc00f474-956d-4c47-ba6e-3e2cc8b6de68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-465466] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a2a69a5-8e1f-4a19-8dc5-e9447950518f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"cab073f2-c21d-45a8-9ed0-a019867d18ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"878205b8-df72-4a2d-a694-754d79dc9800","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig"}}
	{"specversion":"1.0","id":"0bd4b89d-f8c0-4432-9533-72b3d599a07d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube"}}
	{"specversion":"1.0","id":"2bc6ef29-f232-4c95-a0c5-3d06eeb05968","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"25d5d7b0-044e-49a4-a719-9cf354acbdb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1c62341a-42b3-4c11-839b-d13248beda84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-465466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-465466
--- PASS: TestErrorJSONOutput (0.21s)
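Each stdout line in the run above is a CloudEvents 1.0 envelope emitted by `minikube --output=json`. A minimal sketch of decoding one of those lines in Go (the `cloudEvent` struct and `parseEvent` helper are illustrative, trimmed to the fields visible in the sample output; they are not minikube's own types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent models the fields seen in the log lines above: a
// CloudEvents 1.0 envelope whose data payload is a string map.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// parseEvent decodes one line of minikube's JSON output.
func parseEvent(line string) (cloudEvent, error) {
	var ev cloudEvent
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	// The error event from the TestErrorJSONOutput run above.
	line := `{"specversion":"1.0","id":"1c62341a-42b3-4c11-839b-d13248beda84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	ev, err := parseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["exitcode"]) // io.k8s.sigs.minikube.error 56
}
```

This is how the JSON-output tests can assert on structured fields (event type, step numbers, exit codes) rather than scraping human-readable text.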

TestKicCustomNetwork/create_custom_network (37.84s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-422329 --network=
E0920 19:30:32.537297  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-422329 --network=: (35.711096236s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-422329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-422329
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-422329: (2.104329148s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.84s)

TestKicCustomNetwork/use_default_bridge_network (35.52s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-304741 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-304741 --network=bridge: (33.566767069s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-304741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-304741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-304741: (1.926644251s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.52s)

TestKicExistingNetwork (31.51s)

=== RUN   TestKicExistingNetwork
I0920 19:31:33.031702  593105 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 19:31:33.047818  593105 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 19:31:33.047904  593105 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 19:31:33.047922  593105 cli_runner.go:164] Run: docker network inspect existing-network
W0920 19:31:33.064204  593105 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 19:31:33.064232  593105 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0920 19:31:33.064250  593105 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0920 19:31:33.067785  593105 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 19:31:33.086823  593105 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-43b24717c11e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:69:d1:01:cc} reservation:<nil>}
I0920 19:31:33.087800  593105 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400174f000}
I0920 19:31:33.087874  593105 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 19:31:33.087949  593105 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 19:31:33.161198  593105 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-235978 --network=existing-network
E0920 19:31:55.602014  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-235978 --network=existing-network: (29.345587864s)
helpers_test.go:175: Cleaning up "existing-network-235978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-235978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-235978: (2.002284427s)
I0920 19:32:04.525604  593105 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.51s)
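In the log above, minikube skips 192.168.49.0/24 (taken by the existing bridge) and settles on 192.168.58.0/24. As an illustrative sketch only: the scan can be modeled as stepping the third octet by 9 (49, 58, 67, ...) until an untaken /24 is found, a step size inferred from the 49.0 to 58.0 jump in this log; minikube's real network picker is more general than this `freeSubnet` helper, which is not part of its codebase.

```go
package main

import "fmt"

// freeSubnet mimics the scan visible in the log: candidate private
// /24 subnets starting at 192.168.49.0/24, advancing the third octet
// in steps of 9 until one is not already taken.
func freeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free candidate in range
}

func main() {
	taken := map[string]bool{"192.168.49.0/24": true} // the existing bridge
	fmt.Println(freeSubnet(taken)) // 192.168.58.0/24
}
```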

TestKicCustomSubnet (32.38s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-107367 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-107367 --subnet=192.168.60.0/24: (30.282492803s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-107367 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-107367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-107367
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-107367: (2.073942511s)
--- PASS: TestKicCustomSubnet (32.38s)

TestKicStaticIP (32.88s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-494838 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-494838 --static-ip=192.168.200.200: (30.681879101s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-494838 ip
helpers_test.go:175: Cleaning up "static-ip-494838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-494838
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-494838: (2.050777524s)
--- PASS: TestKicStaticIP (32.88s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.43s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-501355 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-501355 --driver=docker  --container-runtime=crio: (31.573562094s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-504120 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-504120 --driver=docker  --container-runtime=crio: (31.64064414s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-501355
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-504120
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-504120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-504120
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-504120: (1.952144005s)
helpers_test.go:175: Cleaning up "first-501355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-501355
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-501355: (1.916526129s)
--- PASS: TestMinikubeProfile (68.43s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-500510 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-500510 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.869750431s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.87s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-500510 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-502225 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0920 19:34:30.744342  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-502225 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.619739369s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.62s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-502225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-500510 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-500510 --alsologtostderr -v=5: (1.635792084s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-502225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-502225
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-502225: (1.201012306s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-502225
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-502225: (7.142422593s)
--- PASS: TestMountStart/serial/RestartStopped (8.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-502225 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (73.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-016298 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 19:35:32.537304  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:35:53.806189  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-016298 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.394312117s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.92s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-016298 -- rollout status deployment/busybox: (4.153221857s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-hp6hh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-k6rf6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-hp6hh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-k6rf6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-hp6hh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-k6rf6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.03s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-hp6hh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-hp6hh -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-k6rf6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-016298 -- exec busybox-7dff88458-k6rf6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
TestMultiNode/serial/AddNode (57.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-016298 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-016298 -v 3 --alsologtostderr: (56.858744623s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.55s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-016298 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp testdata/cp-test.txt multinode-016298:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile788406297/001/cp-test_multinode-016298.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298:/home/docker/cp-test.txt multinode-016298-m02:/home/docker/cp-test_multinode-016298_multinode-016298-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m02 "sudo cat /home/docker/cp-test_multinode-016298_multinode-016298-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298:/home/docker/cp-test.txt multinode-016298-m03:/home/docker/cp-test_multinode-016298_multinode-016298-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m03 "sudo cat /home/docker/cp-test_multinode-016298_multinode-016298-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp testdata/cp-test.txt multinode-016298-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile788406297/001/cp-test_multinode-016298-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298-m02:/home/docker/cp-test.txt multinode-016298:/home/docker/cp-test_multinode-016298-m02_multinode-016298.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298 "sudo cat /home/docker/cp-test_multinode-016298-m02_multinode-016298.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298-m02:/home/docker/cp-test.txt multinode-016298-m03:/home/docker/cp-test_multinode-016298-m02_multinode-016298-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m03 "sudo cat /home/docker/cp-test_multinode-016298-m02_multinode-016298-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp testdata/cp-test.txt multinode-016298-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile788406297/001/cp-test_multinode-016298-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298-m03:/home/docker/cp-test.txt multinode-016298:/home/docker/cp-test_multinode-016298-m03_multinode-016298.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298 "sudo cat /home/docker/cp-test_multinode-016298-m03_multinode-016298.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 cp multinode-016298-m03:/home/docker/cp-test.txt multinode-016298-m02:/home/docker/cp-test_multinode-016298-m03_multinode-016298-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 ssh -n multinode-016298-m02 "sudo cat /home/docker/cp-test_multinode-016298-m03_multinode-016298-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.11s)

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-016298 node stop m03: (1.242987882s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-016298 status: exit status 7 (516.450203ms)

                                                
                                                
-- stdout --
	multinode-016298
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-016298-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-016298-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-016298 status --alsologtostderr: exit status 7 (548.723157ms)

                                                
                                                
-- stdout --
	multinode-016298
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-016298-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-016298-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:37:19.796946  708172 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:37:19.797356  708172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:37:19.797372  708172 out.go:358] Setting ErrFile to fd 2...
	I0920 19:37:19.797379  708172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:37:19.797681  708172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 19:37:19.797893  708172 out.go:352] Setting JSON to false
	I0920 19:37:19.797951  708172 mustload.go:65] Loading cluster: multinode-016298
	I0920 19:37:19.798039  708172 notify.go:220] Checking for updates...
	I0920 19:37:19.799376  708172 config.go:182] Loaded profile config "multinode-016298": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:37:19.799410  708172 status.go:174] checking status of multinode-016298 ...
	I0920 19:37:19.800309  708172 cli_runner.go:164] Run: docker container inspect multinode-016298 --format={{.State.Status}}
	I0920 19:37:19.817775  708172 status.go:364] multinode-016298 host status = "Running" (err=<nil>)
	I0920 19:37:19.817802  708172 host.go:66] Checking if "multinode-016298" exists ...
	I0920 19:37:19.818118  708172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-016298
	I0920 19:37:19.836422  708172 host.go:66] Checking if "multinode-016298" exists ...
	I0920 19:37:19.836949  708172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:37:19.837013  708172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-016298
	I0920 19:37:19.869594  708172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/multinode-016298/id_rsa Username:docker}
	I0920 19:37:19.968363  708172 ssh_runner.go:195] Run: systemctl --version
	I0920 19:37:19.972933  708172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:37:19.984893  708172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:37:20.056701  708172 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 19:37:20.04553358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:37:20.057308  708172 kubeconfig.go:125] found "multinode-016298" server: "https://192.168.67.2:8443"
	I0920 19:37:20.057345  708172 api_server.go:166] Checking apiserver status ...
	I0920 19:37:20.057401  708172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:37:20.069106  708172 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup
	I0920 19:37:20.079031  708172 api_server.go:182] apiserver freezer: "7:freezer:/docker/9ba9b1941e581c0a858864ca94537be6dec46505f79d5b07c03c22683851c171/crio/crio-ced1a2210573b7af3b3d3fbf52183ff464a4dfb145c03940d384d16e0d5fd218"
	I0920 19:37:20.079120  708172 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9ba9b1941e581c0a858864ca94537be6dec46505f79d5b07c03c22683851c171/crio/crio-ced1a2210573b7af3b3d3fbf52183ff464a4dfb145c03940d384d16e0d5fd218/freezer.state
	I0920 19:37:20.088587  708172 api_server.go:204] freezer state: "THAWED"
	I0920 19:37:20.088619  708172 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 19:37:20.096854  708172 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 19:37:20.096888  708172 status.go:456] multinode-016298 apiserver status = Running (err=<nil>)
	I0920 19:37:20.096900  708172 status.go:176] multinode-016298 status: &{Name:multinode-016298 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:37:20.096920  708172 status.go:174] checking status of multinode-016298-m02 ...
	I0920 19:37:20.097276  708172 cli_runner.go:164] Run: docker container inspect multinode-016298-m02 --format={{.State.Status}}
	I0920 19:37:20.116888  708172 status.go:364] multinode-016298-m02 host status = "Running" (err=<nil>)
	I0920 19:37:20.116917  708172 host.go:66] Checking if "multinode-016298-m02" exists ...
	I0920 19:37:20.117334  708172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-016298-m02
	I0920 19:37:20.136480  708172 host.go:66] Checking if "multinode-016298-m02" exists ...
	I0920 19:37:20.136883  708172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:37:20.136940  708172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-016298-m02
	I0920 19:37:20.156862  708172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/19679-586329/.minikube/machines/multinode-016298-m02/id_rsa Username:docker}
	I0920 19:37:20.260047  708172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:37:20.271939  708172 status.go:176] multinode-016298-m02 status: &{Name:multinode-016298-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:37:20.271996  708172 status.go:174] checking status of multinode-016298-m03 ...
	I0920 19:37:20.272294  708172 cli_runner.go:164] Run: docker container inspect multinode-016298-m03 --format={{.State.Status}}
	I0920 19:37:20.290636  708172 status.go:364] multinode-016298-m03 host status = "Stopped" (err=<nil>)
	I0920 19:37:20.290659  708172 status.go:377] host is not running, skipping remaining checks
	I0920 19:37:20.290667  708172 status.go:176] multinode-016298-m03 status: &{Name:multinode-016298-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-016298 node start m03 -v=7 --alsologtostderr: (8.963045802s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.80s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (103.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-016298
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-016298
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-016298: (24.854743824s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-016298 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-016298 --wait=true -v=8 --alsologtostderr: (1m18.12030411s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-016298
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.14s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-016298 node delete m03: (4.914787597s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.57s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 stop
E0920 19:39:30.744512  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-016298 stop: (23.762177259s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-016298 status: exit status 7 (96.204631ms)

                                                
                                                
-- stdout --
	multinode-016298
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-016298-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-016298 status --alsologtostderr: exit status 7 (96.668424ms)

                                                
                                                
-- stdout --
	multinode-016298
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-016298-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 19:39:42.710075  715987 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:39:42.710294  715987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:39:42.710325  715987 out.go:358] Setting ErrFile to fd 2...
	I0920 19:39:42.710345  715987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:39:42.710605  715987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 19:39:42.710813  715987 out.go:352] Setting JSON to false
	I0920 19:39:42.710874  715987 mustload.go:65] Loading cluster: multinode-016298
	I0920 19:39:42.710971  715987 notify.go:220] Checking for updates...
	I0920 19:39:42.711361  715987 config.go:182] Loaded profile config "multinode-016298": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:39:42.711395  715987 status.go:174] checking status of multinode-016298 ...
	I0920 19:39:42.712228  715987 cli_runner.go:164] Run: docker container inspect multinode-016298 --format={{.State.Status}}
	I0920 19:39:42.730158  715987 status.go:364] multinode-016298 host status = "Stopped" (err=<nil>)
	I0920 19:39:42.730178  715987 status.go:377] host is not running, skipping remaining checks
	I0920 19:39:42.730185  715987 status.go:176] multinode-016298 status: &{Name:multinode-016298 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:39:42.730216  715987 status.go:174] checking status of multinode-016298-m02 ...
	I0920 19:39:42.730515  715987 cli_runner.go:164] Run: docker container inspect multinode-016298-m02 --format={{.State.Status}}
	I0920 19:39:42.761703  715987 status.go:364] multinode-016298-m02 host status = "Stopped" (err=<nil>)
	I0920 19:39:42.761774  715987 status.go:377] host is not running, skipping remaining checks
	I0920 19:39:42.761796  715987 status.go:176] multinode-016298-m02 status: &{Name:multinode-016298-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)
TestMultiNode/serial/RestartMultiNode (53.22s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-016298 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0920 19:40:32.536827  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-016298 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (52.533274361s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-016298 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.22s)
TestMultiNode/serial/ValidateNameConflict (34.88s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-016298
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-016298-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-016298-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.160318ms)
-- stdout --
	* [multinode-016298-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-016298-m02' is duplicated with machine name 'multinode-016298-m02' in profile 'multinode-016298'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-016298-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-016298-m03 --driver=docker  --container-runtime=crio: (32.430045745s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-016298
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-016298: exit status 80 (319.757141ms)
-- stdout --
	* Adding node m03 to cluster multinode-016298 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-016298-m03 already exists in multinode-016298-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-016298-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-016298-m03: (1.973589166s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.88s)
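The two failures exercised above (exit 14 `MK_USAGE` and exit 80 `GUEST_NODE_ADD`) both come down to one rule: a new profile name may not collide with an existing profile or with any machine name inside one (e.g. `multinode-016298-m02`). A hedged sketch of that uniqueness check, as a hypothetical helper rather than minikube's actual code:

```python
# Sketch of the profile-name uniqueness rule the test validates.
# `profiles` maps each profile name to its machine names. Hypothetical
# helper, not minikube's implementation.
def validate_profile_name(name, profiles):
    if name in profiles:
        return "MK_USAGE: Profile name should be unique"
    for profile, machines in profiles.items():
        if name in machines:
            return (f"profile name {name!r} is duplicated with a machine "
                    f"name in profile {profile!r}")
    return None  # no conflict

existing = {"multinode-016298": ["multinode-016298", "multinode-016298-m02"]}
print(validate_profile_name("multinode-016298-m02", existing))  # rejected
print(validate_profile_name("multinode-016298-m03", existing))  # → None (allowed)
```

This matches the log: `-m02` is refused as a duplicate machine name, while `-m03` starts cleanly as its own profile (and only then conflicts when `node add` tries to reuse it).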
TestPreload (127.7s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-519292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-519292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.840021489s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-519292 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-519292 image pull gcr.io/k8s-minikube/busybox: (3.31866574s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-519292
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-519292: (5.753187519s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-519292 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-519292 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.06369856s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-519292 image list
helpers_test.go:175: Cleaning up "test-preload-519292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-519292
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-519292: (2.394252055s)
--- PASS: TestPreload (127.70s)
TestScheduledStopUnix (109.01s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-723353 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-723353 --memory=2048 --driver=docker  --container-runtime=crio: (32.527557101s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-723353 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-723353 -n scheduled-stop-723353
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-723353 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 19:43:55.630597  593105 retry.go:31] will retry after 118.191µs: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.631762  593105 retry.go:31] will retry after 201.438µs: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.636721  593105 retry.go:31] will retry after 248.154µs: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.637823  593105 retry.go:31] will retry after 382.286µs: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.638940  593105 retry.go:31] will retry after 440.593µs: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.640022  593105 retry.go:31] will retry after 386.953µs: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.641120  593105 retry.go:31] will retry after 1.489527ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.643308  593105 retry.go:31] will retry after 1.49906ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.645498  593105 retry.go:31] will retry after 1.666147ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.647690  593105 retry.go:31] will retry after 3.531767ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.651904  593105 retry.go:31] will retry after 3.053524ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.655068  593105 retry.go:31] will retry after 11.650838ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.667269  593105 retry.go:31] will retry after 13.491063ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.681573  593105 retry.go:31] will retry after 27.267068ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
I0920 19:43:55.709079  593105 retry.go:31] will retry after 28.293988ms: open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/scheduled-stop-723353/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-723353 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-723353 -n scheduled-stop-723353
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-723353
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-723353 --schedule 15s
E0920 19:44:30.744785  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-723353
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-723353: exit status 7 (69.704318ms)
-- stdout --
	scheduled-stop-723353
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-723353 -n scheduled-stop-723353
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-723353 -n scheduled-stop-723353: exit status 7 (71.356521ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-723353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-723353
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-723353: (4.964463895s)
--- PASS: TestScheduledStopUnix (109.01s)
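The `retry.go:31` lines in this section show the polling loop waiting for the scheduled-stop pid file, with delays growing from ~118µs to ~28ms under random jitter. A minimal sketch of that capped, jittered exponential backoff pattern (a re-implementation for illustration, not minikube's `retry.go`; `base` and `cap` values are made up):

```python
import random

# Sketch: roughly double the delay each attempt, apply random jitter,
# and cap the growth, as the retry.go log lines above suggest.
# Hypothetical parameters, not minikube's actual values.
def backoff_delays(attempts, base=0.0001, cap=0.05, rng=random.random):
    delays = []
    delay = base
    for _ in range(attempts):
        delays.append(delay * (0.5 + rng()))  # jitter in [0.5x, 1.5x)
        delay = min(delay * 2, cap)           # exponential growth, capped
    return delays

print(backoff_delays(5))  # five jittered, roughly doubling delays
```

With a fixed jitter source the sequence is strictly the doubling series up to the cap, which matches the overall shape of the delays in the log.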
TestInsufficientStorage (13.45s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-395494 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-395494 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.936514517s)
-- stdout --
	{"specversion":"1.0","id":"ce85db9b-2642-44e9-b04b-77a6d1928cc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-395494] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"26db7f34-cfe0-4857-9179-8b0a111b4651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"bc24b079-d655-4059-9a59-315fb1ddbf96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79dc99d5-0db3-49dc-9eeb-b28506ce534c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig"}}
	{"specversion":"1.0","id":"a8f2d86a-68c3-486a-8a7b-96d6d35fc4d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube"}}
	{"specversion":"1.0","id":"162f501c-b012-48e2-b5f3-80fc2e7ff372","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fffe54f6-49d0-433b-abec-8d89e949f23c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"34b9003b-abeb-448a-9c7b-32c76d018195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"56da64e3-bfcc-4788-92b4-60a5580aea42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e581711d-b4b3-467e-88fd-3154a3f149b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d6d8703-fc5c-4da7-89d5-a3e1425bf1e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7b763532-9d07-40f7-94e8-b7704b96acde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-395494\" primary control-plane node in \"insufficient-storage-395494\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d3e3846-a3da-48a0-bdb9-6c9aa571cd59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e106e263-0d46-4293-b57b-b420dad52bb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ceb69a3d-2716-4a05-b6e5-33535b781843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-395494 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-395494 --output=json --layout=cluster: exit status 7 (295.294702ms)
-- stdout --
	{"Name":"insufficient-storage-395494","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-395494","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0920 19:45:22.837337  733399 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-395494" does not appear in /home/jenkins/minikube-integration/19679-586329/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-395494 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-395494 --output=json --layout=cluster: exit status 7 (296.70748ms)
-- stdout --
	{"Name":"insufficient-storage-395494","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-395494","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0920 19:45:23.134729  733460 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-395494" does not appear in /home/jenkins/minikube-integration/19679-586329/kubeconfig
	E0920 19:45:23.145594  733460 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/insufficient-storage-395494/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-395494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-395494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-395494: (1.919000353s)
--- PASS: TestInsufficientStorage (13.45s)
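The exit-26 `RSRC_DOCKER_STORAGE` error above ("/var is at 100% of capacity") is a capacity gate: the start aborts when the target filesystem's used percentage crosses a threshold. A hedged sketch of such a check using the standard library (the threshold and path are illustrative, not minikube's real values; the test drives the real check via `MINIKUBE_TEST_STORAGE_CAPACITY`/`MINIKUBE_TEST_AVAILABLE_STORAGE`):

```python
import shutil

# Sketch: refuse to proceed when the filesystem holding `path` is at or
# above `max_percent_used` capacity. Illustrative threshold, not
# minikube's actual gate.
def storage_ok(path="/", max_percent_used=99.0):
    usage = shutil.disk_usage(path)
    percent_used = 100.0 * usage.used / usage.total
    return percent_used < max_percent_used, percent_used

ok, pct = storage_ok("/")
print(f"/ is at {pct:.0f}% of capacity; ok={ok}")
```

The `--force` flag mentioned in the error message would simply bypass a check of this shape.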
TestRunningBinaryUpgrade (63.26s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2111649663 start -p running-upgrade-820600 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0920 19:49:30.741450  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2111649663 start -p running-upgrade-820600 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.300910267s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-820600 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-820600 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.070912262s)
helpers_test.go:175: Cleaning up "running-upgrade-820600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-820600
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-820600: (2.931231256s)
--- PASS: TestRunningBinaryUpgrade (63.26s)
TestKubernetesUpgrade (389.85s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-098239 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-098239 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m15.888205106s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-098239
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-098239: (1.405622998s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-098239 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-098239 status --format={{.Host}}: exit status 7 (131.464161ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-098239 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-098239 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m35.274210005s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-098239 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-098239 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-098239 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (115.256032ms)
-- stdout --
	* [kubernetes-upgrade-098239] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-098239
	    minikube start -p kubernetes-upgrade-098239 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0982392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-098239 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-098239 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-098239 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.397592977s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-098239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-098239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-098239: (2.518018256s)
--- PASS: TestKubernetesUpgrade (389.85s)
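The exit-106 `K8S_DOWNGRADE_UNSUPPORTED` refusal above is a simple version comparison: any requested version older than the running cluster's is rejected, while re-running with the same or a newer version proceeds. A hedged sketch of that guard (a hypothetical helper, not minikube's code; it assumes plain `vMAJOR.MINOR.PATCH` tags like the `v1.20.0`/`v1.31.1` pair in the log):

```python
# Sketch of the downgrade guard behind exit status 106: parse "vX.Y.Z"
# tags into comparable tuples and refuse any target older than the
# current cluster version. Hypothetical helper for illustration.
def parse_version(tag):
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def can_change_version(current, requested):
    # Upgrades and same-version restarts are allowed; downgrades are not.
    return parse_version(requested) >= parse_version(current)

print(can_change_version("v1.31.1", "v1.20.0"))  # → False (downgrade refused)
print(can_change_version("v1.20.0", "v1.31.1"))  # → True (upgrade allowed)
print(can_change_version("v1.31.1", "v1.31.1"))  # → True (restart at same version)
```

Tuple comparison handles the multi-digit minor correctly (`(1, 20, 0) < (1, 31, 1)`), which naive string comparison of the tags would get wrong.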
TestMissingContainerUpgrade (167.41s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3813239828 start -p missing-upgrade-908103 --memory=2200 --driver=docker  --container-runtime=crio
E0920 19:45:32.537284  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3813239828 start -p missing-upgrade-908103 --memory=2200 --driver=docker  --container-runtime=crio: (1m33.527376201s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-908103
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-908103: (10.442371501s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-908103
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-908103 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-908103 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.136032312s)
helpers_test.go:175: Cleaning up "missing-upgrade-908103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-908103
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-908103: (4.406634511s)
--- PASS: TestMissingContainerUpgrade (167.41s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-368918 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-368918 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (90.165446ms)

-- stdout --
	* [NoKubernetes-368918] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (38.93s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-368918 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-368918 --driver=docker  --container-runtime=crio: (38.396598512s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-368918 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.93s)

TestNoKubernetes/serial/StartWithStopK8s (14.97s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-368918 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-368918 --no-kubernetes --driver=docker  --container-runtime=crio: (12.750983727s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-368918 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-368918 status -o json: exit status 2 (302.931185ms)

-- stdout --
	{"Name":"NoKubernetes-368918","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-368918
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-368918: (1.914629692s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (14.97s)

TestNoKubernetes/serial/Start (10.07s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-368918 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-368918 --no-kubernetes --driver=docker  --container-runtime=crio: (10.068883022s)
--- PASS: TestNoKubernetes/serial/Start (10.07s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-368918 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-368918 "sudo systemctl is-active --quiet service kubelet": exit status 1 (347.564489ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

TestNoKubernetes/serial/ProfileList (1.2s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

TestNoKubernetes/serial/Stop (1.27s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-368918
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-368918: (1.272305166s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (8.21s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-368918 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-368918 --driver=docker  --container-runtime=crio: (8.207562508s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.21s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-368918 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-368918 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.134529ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (0.72s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (70.77s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1162572625 start -p stopped-upgrade-500209 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0920 19:48:35.603320  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1162572625 start -p stopped-upgrade-500209 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.359082317s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1162572625 -p stopped-upgrade-500209 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1162572625 -p stopped-upgrade-500209 stop: (2.714040456s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-500209 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-500209 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.701163225s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (70.77s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-500209
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-500209: (1.230651129s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

TestPause/serial/Start (79.68s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-551416 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0920 19:50:32.537465  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-551416 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m19.677312695s)
--- PASS: TestPause/serial/Start (79.68s)

TestPause/serial/SecondStartNoReconfiguration (29.69s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-551416 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-551416 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.674491899s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.69s)

TestPause/serial/Pause (0.81s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-551416 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.41s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-551416 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-551416 --output=json --layout=cluster: exit status 2 (405.014819ms)

-- stdout --
	{"Name":"pause-551416","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-551416","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)

TestPause/serial/Unpause (0.92s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-551416 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

TestPause/serial/PauseAgain (1.21s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-551416 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-551416 --alsologtostderr -v=5: (1.212980343s)
--- PASS: TestPause/serial/PauseAgain (1.21s)

TestPause/serial/DeletePaused (3.13s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-551416 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-551416 --alsologtostderr -v=5: (3.13297446s)
--- PASS: TestPause/serial/DeletePaused (3.13s)

TestPause/serial/VerifyDeletedResources (4.78s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.716353107s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-551416
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-551416: exit status 1 (23.877427ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-551416: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.78s)

TestNetworkPlugins/group/false (5.27s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-423177 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-423177 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (236.259988ms)

-- stdout --
	* [false-423177] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0920 19:53:13.249034  774070 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:53:13.249243  774070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:53:13.249271  774070 out.go:358] Setting ErrFile to fd 2...
	I0920 19:53:13.249292  774070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:53:13.249668  774070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-586329/.minikube/bin
	I0920 19:53:13.250178  774070 out.go:352] Setting JSON to false
	I0920 19:53:13.251146  774070 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12944,"bootTime":1726849050,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0920 19:53:13.251310  774070 start.go:139] virtualization:  
	I0920 19:53:13.264552  774070 out.go:177] * [false-423177] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:53:13.267246  774070 notify.go:220] Checking for updates...
	I0920 19:53:13.267215  774070 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 19:53:13.270085  774070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:53:13.272387  774070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-586329/kubeconfig
	I0920 19:53:13.276131  774070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-586329/.minikube
	I0920 19:53:13.278854  774070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:53:13.281101  774070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:53:13.284053  774070 config.go:182] Loaded profile config "force-systemd-env-595877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 19:53:13.284227  774070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:53:13.323430  774070 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:53:13.323595  774070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:53:13.411945  774070 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:59 SystemTime:2024-09-20 19:53:13.40058331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:53:13.412050  774070 docker.go:318] overlay module found
	I0920 19:53:13.414625  774070 out.go:177] * Using the docker driver based on user configuration
	I0920 19:53:13.417015  774070 start.go:297] selected driver: docker
	I0920 19:53:13.417041  774070 start.go:901] validating driver "docker" against <nil>
	I0920 19:53:13.417056  774070 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:53:13.419207  774070 out.go:201] 
	W0920 19:53:13.421469  774070 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0920 19:53:13.423352  774070 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-423177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-423177

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-423177" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-423177" does not exist

>>> k8s: netcat logs:
error: context "false-423177" does not exist

>>> k8s: describe coredns deployment:
error: context "false-423177" does not exist

>>> k8s: describe coredns pods:
error: context "false-423177" does not exist

>>> k8s: coredns logs:
error: context "false-423177" does not exist

>>> k8s: describe api server pod(s):
error: context "false-423177" does not exist

>>> k8s: api server logs:
error: context "false-423177" does not exist

>>> host: /etc/cni:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: ip a s:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: ip r s:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: iptables-save:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: iptables table nat:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> k8s: describe kube-proxy daemon set:
error: context "false-423177" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-423177" does not exist

>>> k8s: kube-proxy logs:
error: context "false-423177" does not exist

>>> host: kubelet daemon status:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: kubelet daemon config:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> k8s: kubelet logs:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-423177

>>> host: docker daemon status:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: docker daemon config:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: /etc/docker/daemon.json:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: docker system info:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: cri-docker daemon status:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: cri-docker daemon config:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: cri-dockerd version:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: containerd daemon status:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: containerd daemon config:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: /etc/containerd/config.toml:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: containerd config dump:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: crio daemon status:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: crio daemon config:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: /etc/crio:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

>>> host: crio config:
* Profile "false-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-423177"

----------------------- debugLogs end: false-423177 [took: 4.807920576s] --------------------------------
helpers_test.go:175: Cleaning up "false-423177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-423177
--- PASS: TestNetworkPlugins/group/false (5.27s)

TestStartStop/group/old-k8s-version/serial/FirstStart (192.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-714813 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0920 19:55:32.537188  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-714813 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m12.250964596s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (192.25s)

TestStartStop/group/no-preload/serial/FirstStart (63.5s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-347913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-347913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m3.497525599s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.50s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-714813 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dce6604d-b1c7-4f2f-98c2-a830ff309d07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dce6604d-b1c7-4f2f-98c2-a830ff309d07] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003843499s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-714813 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.79s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-714813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-714813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.494145073s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-714813 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.71s)

TestStartStop/group/old-k8s-version/serial/Stop (12.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-714813 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-714813 --alsologtostderr -v=3: (12.619021429s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714813 -n old-k8s-version-714813
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714813 -n old-k8s-version-714813: exit status 7 (138.645231ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-714813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/old-k8s-version/serial/SecondStart (130.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-714813 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-714813 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m9.894896491s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-714813 -n old-k8s-version-714813
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (130.27s)

TestStartStop/group/no-preload/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-347913 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [910d8d16-d849-4eb8-b859-5d71d0188d55] Pending
helpers_test.go:344: "busybox" [910d8d16-d849-4eb8-b859-5d71d0188d55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [910d8d16-d849-4eb8-b859-5d71d0188d55] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003867251s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-347913 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.44s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-347913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-347913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.205066258s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-347913 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.37s)

TestStartStop/group/no-preload/serial/Stop (12.04s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-347913 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-347913 --alsologtostderr -v=3: (12.041290046s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-347913 -n no-preload-347913
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-347913 -n no-preload-347913: exit status 7 (73.160585ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-347913 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (280.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-347913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 19:59:30.741091  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-347913 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m40.13475141s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-347913 -n no-preload-347913
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (280.51s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k96kw" [835adf80-5d59-496b-97b6-e67a73901425] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005911986s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-k96kw" [835adf80-5d59-496b-97b6-e67a73901425] Running
E0920 20:00:32.537110  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005121353s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-714813 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-714813 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-714813 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-714813 -n old-k8s-version-714813
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-714813 -n old-k8s-version-714813: exit status 2 (335.057432ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-714813 -n old-k8s-version-714813
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-714813 -n old-k8s-version-714813: exit status 2 (324.04796ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-714813 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-714813 -n old-k8s-version-714813
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-714813 -n old-k8s-version-714813
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)

TestStartStop/group/embed-certs/serial/FirstStart (77.69s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-094074 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-094074 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m17.691073929s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.69s)

TestStartStop/group/embed-certs/serial/DeployApp (11.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-094074 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [31391146-2bf6-409a-ad80-0dbd15249db6] Pending
helpers_test.go:344: "busybox" [31391146-2bf6-409a-ad80-0dbd15249db6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [31391146-2bf6-409a-ad80-0dbd15249db6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003222628s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-094074 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-094074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-094074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.03782684s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-094074 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.98s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-094074 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-094074 --alsologtostderr -v=3: (11.975715282s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-094074 -n embed-certs-094074
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-094074 -n embed-certs-094074: exit status 7 (65.751391ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-094074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.8s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-094074 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:02:45.456344  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:45.463544  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:45.474937  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:45.496283  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:45.537626  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:45.619051  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:45.780840  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:46.102960  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:46.744561  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:48.026505  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:50.587936  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:02:55.710209  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:03:05.952322  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:03:26.434399  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-094074 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m26.432125812s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-094074 -n embed-certs-094074
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b8j59" [4d12918e-26bd-4fd8-ab70-35be7b2c10da] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003247847s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b8j59" [4d12918e-26bd-4fd8-ab70-35be7b2c10da] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004559701s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-347913 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-347913 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.18s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-347913 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-347913 -n no-preload-347913
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-347913 -n no-preload-347913: exit status 2 (361.78447ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-347913 -n no-preload-347913
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-347913 -n no-preload-347913: exit status 2 (322.802284ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-347913 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-347913 -n no-preload-347913
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-347913 -n no-preload-347913
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.79s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-870894 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:04:07.395966  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:04:30.741132  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-870894 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m17.792000681s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-870894 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5c269c67-f37b-4b34-a435-51d49d18d7cd] Pending
helpers_test.go:344: "busybox" [5c269c67-f37b-4b34-a435-51d49d18d7cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5c269c67-f37b-4b34-a435-51d49d18d7cd] Running
E0920 20:05:15.605410  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003902399s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-870894 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-870894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-870894 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-870894 --alsologtostderr -v=3
E0920 20:05:29.317685  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-870894 --alsologtostderr -v=3: (11.992453116s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894: exit status 7 (74.076068ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-870894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-870894 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:05:32.536707  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-870894 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m36.659610449s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xl22n" [9fa3d91b-590e-4f40-9c1e-0e3d5cc23fdd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004523332s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xl22n" [9fa3d91b-590e-4f40-9c1e-0e3d5cc23fdd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003726465s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-094074 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-094074 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.07s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-094074 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-094074 -n embed-certs-094074
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-094074 -n embed-certs-094074: exit status 2 (335.821476ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-094074 -n embed-certs-094074
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-094074 -n embed-certs-094074: exit status 2 (326.554331ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-094074 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-094074 -n embed-certs-094074
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-094074 -n embed-certs-094074
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (34.42s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-165864 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-165864 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (34.416304771s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-165864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-165864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05239654s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.25s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-165864 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-165864 --alsologtostderr -v=3: (1.249158764s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-165864 -n newest-cni-165864
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-165864 -n newest-cni-165864: exit status 7 (75.619432ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-165864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.06s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-165864 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 20:07:45.456500  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-165864 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (15.595684198s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-165864 -n newest-cni-165864
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.63s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-165864 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.03s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-165864 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-165864 -n newest-cni-165864
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-165864 -n newest-cni-165864: exit status 2 (321.123131ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-165864 -n newest-cni-165864
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-165864 -n newest-cni-165864: exit status 2 (325.563904ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-165864 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-165864 -n newest-cni-165864
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-165864 -n newest-cni-165864
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (80.37s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0920 20:08:13.159110  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:28.674286  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:28.680619  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:28.691963  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:28.713310  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:28.754642  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:28.836016  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:28.997505  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:29.319243  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:29.960661  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:31.242434  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:33.804238  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:38.925805  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:08:49.167364  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:09:09.648815  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:09:13.809730  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m20.368817101s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.37s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-423177 "pgrep -a kubelet"
I0920 20:09:26.381338  593105 config.go:182] Loaded profile config "auto-423177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-423177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-np7p4" [99662b16-b28b-4121-a6e0-c15b209e2384] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:09:30.740728  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-np7p4" [99662b16-b28b-4121-a6e0-c15b209e2384] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004146228s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-423177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.69s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.685455563s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sqx55" [fe9648ef-7fcd-49d8-8a3b-73b29bb6f605] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004040765s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sqx55" [fe9648ef-7fcd-49d8-8a3b-73b29bb6f605] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003453357s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-870894 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-870894 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-870894 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-870894 --alsologtostderr -v=1: (1.312757441s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894: exit status 2 (506.583894ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894: exit status 2 (482.164546ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-870894 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-870894 --alsologtostderr -v=1: (1.073106905s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-870894 -n default-k8s-diff-port-870894
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)
E0920 20:15:18.841840  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/default-k8s-diff-port-870894/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/calico/Start (62.97s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0920 20:10:32.536304  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/addons-060912/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:11:12.533178  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.968812991s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.97s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-htr9c" [2767070a-111e-496d-997b-5e2c94f33826] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003931745s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-423177 "pgrep -a kubelet"
I0920 20:11:24.077341  593105 config.go:182] Loaded profile config "kindnet-423177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-423177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p6ckh" [febd4888-abe3-4b03-9cef-c09fa0df8e02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p6ckh" [febd4888-abe3-4b03-9cef-c09fa0df8e02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004263077s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4vwjb" [41c3e907-8b82-496a-af2b-1b5e2db56b63] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004450889s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-423177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-423177 "pgrep -a kubelet"
I0920 20:11:38.663227  593105 config.go:182] Loaded profile config "calico-423177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-423177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jg848" [4742b04c-a3a4-40ec-be1a-fb953d7342db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jg848" [4742b04c-a3a4-40ec-be1a-fb953d7342db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005600682s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-423177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.572421246s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (53.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0920 20:12:45.456464  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/old-k8s-version-714813/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (53.638058959s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.64s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-423177 "pgrep -a kubelet"
I0920 20:13:03.544750  593105 config.go:182] Loaded profile config "custom-flannel-423177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-423177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-65jdm" [bf5515c7-ab3a-48ad-8929-a3b587ea6455] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-65jdm" [bf5515c7-ab3a-48ad-8929-a3b587ea6455] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005363568s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-423177 "pgrep -a kubelet"
I0920 20:13:10.524405  593105 config.go:182] Loaded profile config "enable-default-cni-423177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-423177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z9xng" [3d4659e0-a506-4668-99b2-da86c338b23f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z9xng" [3d4659e0-a506-4668-99b2-da86c338b23f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004989833s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-423177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-423177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (63.64s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.638283041s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.64s)

TestNetworkPlugins/group/bridge/Start (56.28s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0920 20:13:56.375682  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/no-preload-347913/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:26.644589  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:26.650917  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:26.662225  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:26.683591  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:26.725183  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:26.807231  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:26.968593  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:27.289881  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:27.931831  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:29.213155  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:30.741304  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/functional-345223/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:31.775269  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:14:36.897054  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-423177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (56.281734647s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.28s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4ktsg" [9449dee5-990f-4344-9f2b-2ef8f51b6669] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004411453s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-423177 "pgrep -a kubelet"
I0920 20:14:43.369016  593105 config.go:182] Loaded profile config "bridge-423177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-423177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6nz77" [6a3cfbbc-b363-4540-a419-75aba2d7eadb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:14:47.138632  593105 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/auto-423177/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6nz77" [6a3cfbbc-b363-4540-a419-75aba2d7eadb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004476901s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-423177 "pgrep -a kubelet"
I0920 20:14:48.043833  593105 config.go:182] Loaded profile config "flannel-423177": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-423177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-czr62" [abd27019-43e7-4759-af52-dfb6656b9324] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-czr62" [abd27019-43e7-4759-af52-dfb6656b9324] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004019148s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-423177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-423177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-423177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-266880 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-266880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-266880
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-252566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-252566
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (4.66s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-423177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-423177

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: crictl containers:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> k8s: describe netcat deployment:
error: context "kubenet-423177" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-423177" does not exist

>>> k8s: netcat logs:
error: context "kubenet-423177" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-423177" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-423177" does not exist

>>> k8s: coredns logs:
error: context "kubenet-423177" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-423177" does not exist

>>> k8s: api server logs:
error: context "kubenet-423177" does not exist

>>> host: /etc/cni:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: ip a s:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: ip r s:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: iptables-save:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: iptables table nat:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-423177" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-423177" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-423177" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: kubelet daemon config:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> k8s: kubelet logs:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19679-586329/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 19:52:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-098239
contexts:
- context:
    cluster: kubernetes-upgrade-098239
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 19:52:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-098239
  name: kubernetes-upgrade-098239
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-098239
  user:
    client-certificate: /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/kubernetes-upgrade-098239/client.crt
    client-key: /home/jenkins/minikube-integration/19679-586329/.minikube/profiles/kubernetes-upgrade-098239/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-423177

>>> host: docker daemon status:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: docker daemon config:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: docker system info:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: cri-docker daemon status:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: cri-docker daemon config:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: cri-dockerd version:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: containerd daemon status:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: containerd daemon config:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: containerd config dump:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: crio daemon status:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: crio daemon config:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: /etc/crio:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

>>> host: crio config:
* Profile "kubenet-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-423177"

----------------------- debugLogs end: kubenet-423177 [took: 4.425982394s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-423177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-423177
--- SKIP: TestNetworkPlugins/group/kubenet (4.66s)

TestNetworkPlugins/group/cilium (5.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-423177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-423177

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-423177

>>> host: /etc/nsswitch.conf:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /etc/hosts:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /etc/resolv.conf:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-423177

>>> host: crictl pods:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: crictl containers:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> k8s: describe netcat deployment:
error: context "cilium-423177" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-423177" does not exist

>>> k8s: netcat logs:
error: context "cilium-423177" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-423177" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-423177" does not exist

>>> k8s: coredns logs:
error: context "cilium-423177" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-423177" does not exist

>>> k8s: api server logs:
error: context "cilium-423177" does not exist

>>> host: /etc/cni:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: ip a s:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: ip r s:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: iptables-save:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: iptables table nat:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-423177

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-423177

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-423177" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-423177" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-423177

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-423177

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-423177" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-423177" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-423177" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-423177" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-423177" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: kubelet daemon config:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> k8s: kubelet logs:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-423177

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: docker system info:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: cri-docker daemon status:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: cri-docker daemon config:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: cri-dockerd version:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: containerd daemon status:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: containerd daemon config:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: containerd config dump:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: crio daemon status:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: crio daemon config:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: /etc/crio:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

>>> host: crio config:
* Profile "cilium-423177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-423177"

----------------------- debugLogs end: cilium-423177 [took: 5.269471725s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-423177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-423177
--- SKIP: TestNetworkPlugins/group/cilium (5.45s)