Test Report: Docker_Linux_crio_arm64 19734

795b96072c2ea51545c2bdfc984dcdf8fe273799:2024-09-30:36435

Failed tests (3/327)

Order  Failed test                        Duration (s)
33     TestAddons/parallel/Registry       73.78
34     TestAddons/parallel/Ingress        151.32
36     TestAddons/parallel/MetricsServer  331.88
TestAddons/parallel/Registry (73.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.838201ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zx9j9" [a2779ea5-90ce-41c6-800a-4fd0e62455e1] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005426944s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nxhd5" [78962db4-c230-431b-b141-405fd6389146] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003597914s
addons_test.go:338: (dbg) Run:  kubectl --context addons-718366 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-718366 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-718366 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.106985686s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-718366 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 ip
2024/09/30 10:44:04 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-718366
helpers_test.go:235: (dbg) docker inspect addons-718366:

-- stdout --
	[
	    {
	        "Id": "ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894",
	        "Created": "2024-09-30T10:31:43.905448896Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 576683,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-30T10:31:44.063796451Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/hosts",
	        "LogPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894-json.log",
	        "Name": "/addons-718366",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-718366:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-718366",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64-init/diff:/var/lib/docker/overlay2/89114fb86e05dfc705528dc965d39dcbdae2b3c32ee9939bb163740716767303/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-718366",
	                "Source": "/var/lib/docker/volumes/addons-718366/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-718366",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-718366",
	                "name.minikube.sigs.k8s.io": "addons-718366",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4000b12ceab08239f17e20c17eb46f041a0a6e684a414119cdec0d3429928e0b",
	            "SandboxKey": "/var/run/docker/netns/4000b12ceab0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38991"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-718366": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49bb2287327a5d5bf19993c7fe6d9348c5cc91efc29c195f3a50d6290c89924e",
	                    "EndpointID": "a3d75320f00be0ed0cbab5bc16e3263619548cfeae3e76a58471414489bf0190",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-718366",
	                        "ed341e1151f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-718366 -n addons-718366
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 logs -n 25: (1.661196393s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-032798   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | -p download-only-032798              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-032798              | download-only-032798   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | -o=json --download-only              | download-only-575153   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | -p download-only-575153              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-575153              | download-only-575153   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-032798              | download-only-032798   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-575153              | download-only-575153   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | --download-only -p                   | download-docker-121895 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | download-docker-121895               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-121895            | download-docker-121895 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | --download-only -p                   | binary-mirror-919874   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | binary-mirror-919874                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44655               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-919874              | binary-mirror-919874   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| addons  | enable dashboard -p                  | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | addons-718366                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | addons-718366                        |                        |         |         |                     |                     |
	| start   | -p addons-718366 --wait=true         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:42 UTC | 30 Sep 24 10:42 UTC |
	|         | -p addons-718366                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-718366 addons disable         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-718366 addons                 | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-718366 addons                 | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-718366 ip                     | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC | 30 Sep 24 10:44 UTC |
	| addons  | addons-718366 addons disable         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC | 30 Sep 24 10:44 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:31:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:31:19.588253  576188 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:31:19.588435  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:19.588464  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:31:19.588483  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:19.588757  576188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 10:31:19.589326  576188 out.go:352] Setting JSON to false
	I0930 10:31:19.590293  576188 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":123226,"bootTime":1727569054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0930 10:31:19.590400  576188 start.go:139] virtualization:  
	I0930 10:31:19.592475  576188 out.go:177] * [addons-718366] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:31:19.593683  576188 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:31:19.593737  576188 notify.go:220] Checking for updates...
	I0930 10:31:19.596014  576188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:31:19.597688  576188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:31:19.598789  576188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	I0930 10:31:19.600169  576188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:31:19.601274  576188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:31:19.602931  576188 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:31:19.624953  576188 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:31:19.625081  576188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:19.686322  576188 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:31:19.676149404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:19.686454  576188 docker.go:318] overlay module found
	I0930 10:31:19.688493  576188 out.go:177] * Using the docker driver based on user configuration
	I0930 10:31:19.689696  576188 start.go:297] selected driver: docker
	I0930 10:31:19.689712  576188 start.go:901] validating driver "docker" against <nil>
	I0930 10:31:19.689727  576188 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:31:19.690364  576188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:19.737739  576188 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:31:19.72812774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:19.737977  576188 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:31:19.738212  576188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:31:19.739656  576188 out.go:177] * Using Docker driver with root privileges
	I0930 10:31:19.740990  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:31:19.741052  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:31:19.741072  576188 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 10:31:19.741162  576188 start.go:340] cluster config:
	{Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:31:19.743023  576188 out.go:177] * Starting "addons-718366" primary control-plane node in "addons-718366" cluster
	I0930 10:31:19.743990  576188 cache.go:121] Beginning downloading kic base image for docker with crio
	I0930 10:31:19.745206  576188 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:31:19.746898  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:19.746949  576188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0930 10:31:19.746962  576188 cache.go:56] Caching tarball of preloaded images
	I0930 10:31:19.747074  576188 preload.go:172] Found /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0930 10:31:19.747089  576188 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 10:31:19.747446  576188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json ...
	I0930 10:31:19.747510  576188 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:31:19.747474  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json: {Name:mk2af656d2be7cf8581e9e41a4766db590e98cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:19.763017  576188 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:31:19.763137  576188 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:31:19.763167  576188 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0930 10:31:19.763175  576188 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0930 10:31:19.763182  576188 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0930 10:31:19.763188  576188 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0930 10:31:36.606388  576188 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0930 10:31:36.606431  576188 cache.go:194] Successfully downloaded all kic artifacts
	I0930 10:31:36.606473  576188 start.go:360] acquireMachinesLock for addons-718366: {Name:mkcc9f52048bcb539eb2c19ba8edac315f37b684 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:31:36.606610  576188 start.go:364] duration metric: took 113.425µs to acquireMachinesLock for "addons-718366"
	I0930 10:31:36.606640  576188 start.go:93] Provisioning new machine with config: &{Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 10:31:36.606722  576188 start.go:125] createHost starting for "" (driver="docker")
	I0930 10:31:36.609505  576188 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0930 10:31:36.609800  576188 start.go:159] libmachine.API.Create for "addons-718366" (driver="docker")
	I0930 10:31:36.609842  576188 client.go:168] LocalClient.Create starting
	I0930 10:31:36.609960  576188 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem
	I0930 10:31:36.990982  576188 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem
	I0930 10:31:37.632250  576188 cli_runner.go:164] Run: docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0930 10:31:37.647997  576188 cli_runner.go:211] docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0930 10:31:37.648087  576188 network_create.go:284] running [docker network inspect addons-718366] to gather additional debugging logs...
	I0930 10:31:37.648108  576188 cli_runner.go:164] Run: docker network inspect addons-718366
	W0930 10:31:37.666472  576188 cli_runner.go:211] docker network inspect addons-718366 returned with exit code 1
	I0930 10:31:37.666507  576188 network_create.go:287] error running [docker network inspect addons-718366]: docker network inspect addons-718366: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-718366 not found
	I0930 10:31:37.666521  576188 network_create.go:289] output of [docker network inspect addons-718366]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-718366 not found
	
	** /stderr **
	I0930 10:31:37.666652  576188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:31:37.682855  576188 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b0f20}
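The "using free private subnet" line above reports the derived addresses for 192.168.49.0/24. As a sketch, the same bookkeeping (gateway, usable client range, broadcast) can be spelled out with plain string math; the CIDR comes from the log, and this only handles a /24 prefix:

```shell
# Derive the per-/24 values minikube logs for its chosen subnet.
cidr=192.168.49.0/24
base=${cidr%.*}                 # strip ".0/24" -> 192.168.49
echo "Gateway:   $base.1"       # first host address on the bridge
echo "ClientMin: $base.2"       # first address handed to containers
echo "ClientMax: $base.254"     # last usable host address
echo "Broadcast: $base.255"
```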
	I0930 10:31:37.682901  576188 network_create.go:124] attempt to create docker network addons-718366 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0930 10:31:37.682963  576188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-718366 addons-718366
	I0930 10:31:37.753006  576188 network_create.go:108] docker network addons-718366 192.168.49.0/24 created
	I0930 10:31:37.753040  576188 kic.go:121] calculated static IP "192.168.49.2" for the "addons-718366" container
	I0930 10:31:37.753117  576188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0930 10:31:37.768087  576188 cli_runner.go:164] Run: docker volume create addons-718366 --label name.minikube.sigs.k8s.io=addons-718366 --label created_by.minikube.sigs.k8s.io=true
	I0930 10:31:37.784157  576188 oci.go:103] Successfully created a docker volume addons-718366
	I0930 10:31:37.784245  576188 cli_runner.go:164] Run: docker run --rm --name addons-718366-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --entrypoint /usr/bin/test -v addons-718366:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0930 10:31:39.859396  576188 cli_runner.go:217] Completed: docker run --rm --name addons-718366-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --entrypoint /usr/bin/test -v addons-718366:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.075110378s)
	I0930 10:31:39.859424  576188 oci.go:107] Successfully prepared a docker volume addons-718366
	I0930 10:31:39.859448  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:39.859467  576188 kic.go:194] Starting extracting preloaded images to volume ...
	I0930 10:31:39.859530  576188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718366:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0930 10:31:43.835757  576188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718366:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.97617046s)
	I0930 10:31:43.835789  576188 kic.go:203] duration metric: took 3.976319306s to extract preloaded images to volume ...
	W0930 10:31:43.835943  576188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0930 10:31:43.836061  576188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0930 10:31:43.891196  576188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-718366 --name addons-718366 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-718366 --network addons-718366 --ip 192.168.49.2 --volume addons-718366:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0930 10:31:44.248245  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Running}}
	I0930 10:31:44.274600  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:44.306529  576188 cli_runner.go:164] Run: docker exec addons-718366 stat /var/lib/dpkg/alternatives/iptables
	I0930 10:31:44.359444  576188 oci.go:144] the created container "addons-718366" has a running status.
	I0930 10:31:44.359471  576188 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa...
	I0930 10:31:44.997180  576188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0930 10:31:45.033020  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:45.054795  576188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0930 10:31:45.054823  576188 kic_runner.go:114] Args: [docker exec --privileged addons-718366 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0930 10:31:45.150433  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:45.178099  576188 machine.go:93] provisionDockerMachine start ...
	I0930 10:31:45.178219  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.203008  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.203294  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.203305  576188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 10:31:45.341698  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718366
	
	I0930 10:31:45.341727  576188 ubuntu.go:169] provisioning hostname "addons-718366"
	I0930 10:31:45.341795  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.364079  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.364321  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.364339  576188 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-718366 && echo "addons-718366" | sudo tee /etc/hostname
	I0930 10:31:45.513605  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718366
	
	I0930 10:31:45.513697  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.531270  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.531519  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.531542  576188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-718366' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-718366/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-718366' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 10:31:45.657393  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
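The /etc/hosts fixup that minikube pipes over SSH above can be rehearsed locally against a scratch copy. The file path and initial contents below are stand-ins; only the hostname addons-718366 is taken from the log:

```shell
# Scratch file standing in for /etc/hosts on the node.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 ubuntu\n' > "$hosts"
name=addons-718366
# Same shape as the SSH command in the log: only touch the file when the
# hostname is not already present, preferring to rewrite an existing
# 127.0.1.1 entry over appending a new one.
if ! grep -q "[[:space:]]$name" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```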
	I0930 10:31:45.657421  576188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-570035/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-570035/.minikube}
	I0930 10:31:45.657449  576188 ubuntu.go:177] setting up certificates
	I0930 10:31:45.657461  576188 provision.go:84] configureAuth start
	I0930 10:31:45.657532  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:45.674066  576188 provision.go:143] copyHostCerts
	I0930 10:31:45.674149  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/ca.pem (1078 bytes)
	I0930 10:31:45.674271  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/cert.pem (1123 bytes)
	I0930 10:31:45.674342  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/key.pem (1679 bytes)
	I0930 10:31:45.674396  576188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem org=jenkins.addons-718366 san=[127.0.0.1 192.168.49.2 addons-718366 localhost minikube]
	I0930 10:31:45.981328  576188 provision.go:177] copyRemoteCerts
	I0930 10:31:45.981423  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 10:31:45.981472  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.997951  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.090693  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 10:31:46.116251  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 10:31:46.141025  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 10:31:46.166818  576188 provision.go:87] duration metric: took 509.328593ms to configureAuth
	I0930 10:31:46.166888  576188 ubuntu.go:193] setting minikube options for container-runtime
	I0930 10:31:46.167109  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:31:46.167220  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.183793  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:46.184047  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:46.184069  576188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 10:31:46.414611  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 10:31:46.414638  576188 machine.go:96] duration metric: took 1.236519349s to provisionDockerMachine
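The sysconfig write a few lines above follows a printf-to-tee pattern that can be rehearsed against a scratch file; the real target is /etc/sysconfig/crio.minikube, and the `systemctl restart crio` from the log is deliberately left out here:

```shell
# Scratch file standing in for /etc/sysconfig/crio.minikube.
out=$(mktemp)
printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | tee "$out" > /dev/null
cat "$out"
```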
	I0930 10:31:46.414654  576188 client.go:171] duration metric: took 9.804797803s to LocalClient.Create
	I0930 10:31:46.414708  576188 start.go:167] duration metric: took 9.804909414s to libmachine.API.Create "addons-718366"
	I0930 10:31:46.414724  576188 start.go:293] postStartSetup for "addons-718366" (driver="docker")
	I0930 10:31:46.414735  576188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 10:31:46.414836  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 10:31:46.414922  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.432825  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.526839  576188 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 10:31:46.529986  576188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 10:31:46.530020  576188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 10:31:46.530031  576188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 10:31:46.530038  576188 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0930 10:31:46.530053  576188 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-570035/.minikube/addons for local assets ...
	I0930 10:31:46.530129  576188 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-570035/.minikube/files for local assets ...
	I0930 10:31:46.530155  576188 start.go:296] duration metric: took 115.424998ms for postStartSetup
	I0930 10:31:46.530481  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:46.546445  576188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json ...
	I0930 10:31:46.546743  576188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:31:46.546793  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.563100  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.658380  576188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0930 10:31:46.662859  576188 start.go:128] duration metric: took 10.056121452s to createHost
	I0930 10:31:46.662883  576188 start.go:83] releasing machines lock for "addons-718366", held for 10.056259303s
	I0930 10:31:46.662953  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:46.679358  576188 ssh_runner.go:195] Run: cat /version.json
	I0930 10:31:46.679415  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.679741  576188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 10:31:46.679803  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.704694  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.707977  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.917060  576188 ssh_runner.go:195] Run: systemctl --version
	I0930 10:31:46.921195  576188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 10:31:47.061112  576188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 10:31:47.065232  576188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:31:47.086297  576188 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0930 10:31:47.086388  576188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:31:47.121211  576188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
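The loopback-CNI disabling step above (sidelining configs by renaming them with a .mk_disabled suffix) can be rehearsed in a scratch directory; "$d" below stands in for /etc/cni/net.d and the file names are made up to mirror the log:

```shell
# Scratch directory standing in for /etc/cni/net.d.
d=$(mktemp -d)
touch "$d/200-loopback.conf" "$d/87-podman-bridge.conflist"
# Same find/mv shape as the log: rename any loopback config not yet
# disabled, leaving other CNI configs untouched.
find "$d" -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled' \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```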
	I0930 10:31:47.121240  576188 start.go:495] detecting cgroup driver to use...
	I0930 10:31:47.121275  576188 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:31:47.121327  576188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 10:31:47.138863  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 10:31:47.150816  576188 docker.go:217] disabling cri-docker service (if available) ...
	I0930 10:31:47.150879  576188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 10:31:47.165652  576188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 10:31:47.179926  576188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 10:31:47.273399  576188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 10:31:47.363581  576188 docker.go:233] disabling docker service ...
	I0930 10:31:47.363669  576188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 10:31:47.383649  576188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 10:31:47.396300  576188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 10:31:47.479534  576188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 10:31:47.578817  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 10:31:47.590693  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:31:47.606912  576188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 10:31:47.606982  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.616770  576188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 10:31:47.616838  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.626842  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.636932  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.646765  576188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 10:31:47.655795  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.665503  576188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.681353  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.691540  576188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 10:31:47.700478  576188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 10:31:47.709442  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:31:47.791594  576188 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 10:31:47.910242  576188 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 10:31:47.910380  576188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 10:31:47.913887  576188 start.go:563] Will wait 60s for crictl version
	I0930 10:31:47.913948  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:31:47.917201  576188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 10:31:47.956213  576188 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0930 10:31:47.956327  576188 ssh_runner.go:195] Run: crio --version
	I0930 10:31:47.995739  576188 ssh_runner.go:195] Run: crio --version
	I0930 10:31:48.038600  576188 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0930 10:31:48.040972  576188 cli_runner.go:164] Run: docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:31:48.059448  576188 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0930 10:31:48.063378  576188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:31:48.074967  576188 kubeadm.go:883] updating cluster {Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 10:31:48.075101  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:48.075164  576188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:31:48.152821  576188 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 10:31:48.152846  576188 crio.go:433] Images already preloaded, skipping extraction
	I0930 10:31:48.152903  576188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:31:48.188287  576188 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 10:31:48.188312  576188 cache_images.go:84] Images are preloaded, skipping loading
	I0930 10:31:48.188323  576188 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0930 10:31:48.188415  576188 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-718366 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 10:31:48.188496  576188 ssh_runner.go:195] Run: crio config
	I0930 10:31:48.238352  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:31:48.238376  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:31:48.238386  576188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 10:31:48.238408  576188 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-718366 NodeName:addons-718366 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 10:31:48.238553  576188 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-718366"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 10:31:48.238630  576188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 10:31:48.247791  576188 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 10:31:48.247902  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 10:31:48.256589  576188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0930 10:31:48.274946  576188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 10:31:48.293776  576188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0930 10:31:48.312418  576188 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0930 10:31:48.315789  576188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:31:48.326439  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:31:48.407610  576188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:31:48.421862  576188 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366 for IP: 192.168.49.2
	I0930 10:31:48.421936  576188 certs.go:194] generating shared ca certs ...
	I0930 10:31:48.421965  576188 certs.go:226] acquiring lock for ca certs: {Name:mk1a6e0acac4c352dd045fb15e8f16e43e290be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.422139  576188 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key
	I0930 10:31:48.852559  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt ...
	I0930 10:31:48.852592  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt: {Name:mkf151645d175ccb0b3534f7f3a47f78c7b74bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.852823  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key ...
	I0930 10:31:48.852839  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key: {Name:mk253c50c9e044c6b24426ba126fc768ae2c086d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.852936  576188 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key
	I0930 10:31:49.127433  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt ...
	I0930 10:31:49.127472  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt: {Name:mk3c5c40e5e854bce5292f6c8b72b378b70a89ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.127671  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key ...
	I0930 10:31:49.127693  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key: {Name:mkccb69636b16c12bfb67aee8a9ccc8fbc4adc20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.127784  576188 certs.go:256] generating profile certs ...
	I0930 10:31:49.127846  576188 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key
	I0930 10:31:49.127867  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt with IP's: []
	I0930 10:31:49.435254  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt ...
	I0930 10:31:49.435286  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: {Name:mkb5471f9020f84972ffa54ded95d7795d2a1016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.435477  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key ...
	I0930 10:31:49.435489  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key: {Name:mk3319c7a4b7aa7eacc7a275bdff66d1921999a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.435574  576188 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da
	I0930 10:31:49.435592  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0930 10:31:50.182674  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da ...
	I0930 10:31:50.182710  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da: {Name:mk6507e673c5274a73199d398bdbaf9b2d7b6554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.182907  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da ...
	I0930 10:31:50.182921  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da: {Name:mk737ffdf84242931763a97a2893d5f88d102eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.183007  576188 certs.go:381] copying /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da -> /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt
	I0930 10:31:50.183084  576188 certs.go:385] copying /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da -> /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key
	I0930 10:31:50.183135  576188 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key
	I0930 10:31:50.183156  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt with IP's: []
	I0930 10:31:50.657677  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt ...
	I0930 10:31:50.657708  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt: {Name:mkddac17456589328bd0297cfc529913e40d6096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.657893  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key ...
	I0930 10:31:50.657907  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key: {Name:mk1da3d7241ee96e850a287589cbd33941beaf05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.659767  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 10:31:50.659810  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem (1078 bytes)
	I0930 10:31:50.659833  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem (1123 bytes)
	I0930 10:31:50.659862  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem (1679 bytes)
	I0930 10:31:50.660447  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 10:31:50.684494  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 10:31:50.708442  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 10:31:50.732440  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0930 10:31:50.756657  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 10:31:50.780179  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 10:31:50.804081  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 10:31:50.832833  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 10:31:50.870081  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 10:31:50.894487  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 10:31:50.911847  576188 ssh_runner.go:195] Run: openssl version
	I0930 10:31:50.917167  576188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 10:31:50.926449  576188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.929974  576188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.930037  576188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.936865  576188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 10:31:50.946146  576188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 10:31:50.949263  576188 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 10:31:50.949326  576188 kubeadm.go:392] StartCluster: {Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:31:50.949411  576188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 10:31:50.949469  576188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 10:31:50.986405  576188 cri.go:89] found id: ""
	I0930 10:31:50.986521  576188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 10:31:50.995471  576188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 10:31:51.005070  576188 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0930 10:31:51.005164  576188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 10:31:51.014498  576188 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 10:31:51.014517  576188 kubeadm.go:157] found existing configuration files:
	
	I0930 10:31:51.014593  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 10:31:51.023579  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 10:31:51.023670  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 10:31:51.032109  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 10:31:51.040792  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 10:31:51.040883  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 10:31:51.049272  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 10:31:51.058271  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 10:31:51.058357  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 10:31:51.067199  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 10:31:51.075621  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 10:31:51.075693  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 10:31:51.083850  576188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0930 10:31:51.127566  576188 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 10:31:51.127636  576188 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 10:31:51.147314  576188 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0930 10:31:51.147389  576188 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0930 10:31:51.147428  576188 kubeadm.go:310] OS: Linux
	I0930 10:31:51.147478  576188 kubeadm.go:310] CGROUPS_CPU: enabled
	I0930 10:31:51.147529  576188 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0930 10:31:51.147580  576188 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0930 10:31:51.147630  576188 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0930 10:31:51.147689  576188 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0930 10:31:51.147743  576188 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0930 10:31:51.147792  576188 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0930 10:31:51.147843  576188 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0930 10:31:51.147891  576188 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0930 10:31:51.211072  576188 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 10:31:51.211220  576188 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 10:31:51.211322  576188 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 10:31:51.217978  576188 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 10:31:51.222074  576188 out.go:235]   - Generating certificates and keys ...
	I0930 10:31:51.222200  576188 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 10:31:51.222290  576188 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 10:31:51.507541  576188 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 10:31:52.100429  576188 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 10:31:52.343512  576188 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 10:31:53.350821  576188 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 10:31:54.127332  576188 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 10:31:54.127730  576188 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-718366 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:31:55.090224  576188 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 10:31:55.090597  576188 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-718366 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:31:55.557333  576188 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 10:31:56.433561  576188 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 10:31:57.360076  576188 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 10:31:57.360372  576188 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 10:31:57.616865  576188 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 10:31:58.166068  576188 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 10:31:58.642711  576188 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 10:31:59.408755  576188 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 10:31:59.928063  576188 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 10:31:59.928676  576188 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 10:31:59.931546  576188 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 10:31:59.934534  576188 out.go:235]   - Booting up control plane ...
	I0930 10:31:59.934632  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 10:31:59.934707  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 10:31:59.934773  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 10:31:59.943378  576188 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 10:31:59.949241  576188 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 10:31:59.949518  576188 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 10:32:00.105875  576188 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 10:32:00.106001  576188 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 10:32:01.107740  576188 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001980346s
	I0930 10:32:01.107838  576188 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 10:32:07.109888  576188 kubeadm.go:310] [api-check] The API server is healthy after 6.002182723s
	I0930 10:32:07.131339  576188 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 10:32:07.151401  576188 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 10:32:07.177130  576188 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 10:32:07.177349  576188 kubeadm.go:310] [mark-control-plane] Marking the node addons-718366 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 10:32:07.188510  576188 kubeadm.go:310] [bootstrap-token] Using token: 8aonc1.ekajo8hgoq6vth44
	I0930 10:32:07.193078  576188 out.go:235]   - Configuring RBAC rules ...
	I0930 10:32:07.193212  576188 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 10:32:07.195793  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 10:32:07.203953  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 10:32:07.207903  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 10:32:07.211613  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 10:32:07.218369  576188 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 10:32:07.519705  576188 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 10:32:07.953415  576188 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 10:32:08.516178  576188 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 10:32:08.517416  576188 kubeadm.go:310] 
	I0930 10:32:08.517508  576188 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 10:32:08.517531  576188 kubeadm.go:310] 
	I0930 10:32:08.517630  576188 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 10:32:08.517641  576188 kubeadm.go:310] 
	I0930 10:32:08.517681  576188 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 10:32:08.517745  576188 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 10:32:08.517806  576188 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 10:32:08.517818  576188 kubeadm.go:310] 
	I0930 10:32:08.517880  576188 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 10:32:08.517888  576188 kubeadm.go:310] 
	I0930 10:32:08.517935  576188 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 10:32:08.517940  576188 kubeadm.go:310] 
	I0930 10:32:08.517992  576188 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 10:32:08.518066  576188 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 10:32:08.518134  576188 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 10:32:08.518138  576188 kubeadm.go:310] 
	I0930 10:32:08.518221  576188 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 10:32:08.518298  576188 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 10:32:08.518302  576188 kubeadm.go:310] 
	I0930 10:32:08.518385  576188 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8aonc1.ekajo8hgoq6vth44 \
	I0930 10:32:08.518487  576188 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:34f1ba6de874bd896834dc114ac775d877f5b795b01506ad8bb22dc9b74f70da \
	I0930 10:32:08.518508  576188 kubeadm.go:310] 	--control-plane 
	I0930 10:32:08.518513  576188 kubeadm.go:310] 
	I0930 10:32:08.518603  576188 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 10:32:08.518608  576188 kubeadm.go:310] 
	I0930 10:32:08.518690  576188 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8aonc1.ekajo8hgoq6vth44 \
	I0930 10:32:08.518791  576188 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:34f1ba6de874bd896834dc114ac775d877f5b795b01506ad8bb22dc9b74f70da 
	I0930 10:32:08.522706  576188 kubeadm.go:310] W0930 10:31:51.124221    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:32:08.523011  576188 kubeadm.go:310] W0930 10:31:51.125105    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:32:08.523230  576188 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0930 10:32:08.523336  576188 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
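The `--discovery-token-ca-cert-hash sha256:…` value printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. As a hedged sketch of how that value is derived (using a throwaway self-signed certificate as a stand-in for the real `/etc/kubernetes/pki/ca.crt`, which this log does not expose):

```shell
# Create a disposable CA cert purely for illustration (stand-in for ca.crt).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/ca.key" \
  -out "$tmp/ca.crt" -subj "/CN=kubernetes" -days 1 2>/dev/null

# Extract the public key, convert it to DER, and hash it --
# this is the same shape of value kubeadm prints after "sha256:".
hash=$(openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

Running the pipeline against the actual CA cert on the control plane reproduces the hash shown in the log; the self-signed cert here only demonstrates the mechanics.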
	I0930 10:32:08.523356  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:32:08.523365  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:32:08.526350  576188 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 10:32:08.528840  576188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 10:32:08.532638  576188 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 10:32:08.532658  576188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 10:32:08.550943  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 10:32:08.822890  576188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 10:32:08.823054  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:08.823069  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-718366 minikube.k8s.io/updated_at=2024_09_30T10_32_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-718366 minikube.k8s.io/primary=true
	I0930 10:32:08.983346  576188 ops.go:34] apiserver oom_adj: -16
	I0930 10:32:08.998983  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:09.500016  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:09.999359  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:10.499482  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:10.999362  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:11.499443  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:11.999113  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:12.500484  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:12.616697  576188 kubeadm.go:1113] duration metric: took 3.793709432s to wait for elevateKubeSystemPrivileges
	I0930 10:32:12.616732  576188 kubeadm.go:394] duration metric: took 21.667424713s to StartCluster
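The repeated `kubectl get sa default` runs above (roughly every 500 ms until the default service account exists, summarized as the `elevateKubeSystemPrivileges` wait) follow a plain poll-until-success pattern. A minimal sketch, assuming a fixed interval rather than minikube's actual retry implementation, with a deliberately flaky demo command in place of the real `kubectl` call:

```shell
# retry ATTEMPTS CMD...: rerun CMD until it succeeds or ATTEMPTS is exhausted.
retry() {
  attempts=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 0.5   # fixed interval, mirroring the ~500ms cadence in the log
  done
}

# Demo stand-in for "kubectl get sa default": fails twice, then succeeds.
n=0
flaky() { n=$((n + 1)); [ "$n" -ge 3 ]; }

retry 5 flaky && echo "ok after $n tries"
```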
	I0930 10:32:12.616750  576188 settings.go:142] acquiring lock: {Name:mk11436cfb74a22d5df272d0ed716a2f4f11abe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:32:12.616873  576188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:32:12.617251  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/kubeconfig: {Name:mk2b4dce89b9a4c7357cab4707a99982ddc5b94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:32:12.617445  576188 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 10:32:12.617597  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 10:32:12.617836  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:32:12.617874  576188 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 10:32:12.617960  576188 addons.go:69] Setting yakd=true in profile "addons-718366"
	I0930 10:32:12.617979  576188 addons.go:234] Setting addon yakd=true in "addons-718366"
	I0930 10:32:12.618003  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.618496  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.618987  576188 addons.go:69] Setting inspektor-gadget=true in profile "addons-718366"
	I0930 10:32:12.619028  576188 addons.go:234] Setting addon inspektor-gadget=true in "addons-718366"
	I0930 10:32:12.619066  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.619563  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.619718  576188 addons.go:69] Setting metrics-server=true in profile "addons-718366"
	I0930 10:32:12.619732  576188 addons.go:234] Setting addon metrics-server=true in "addons-718366"
	I0930 10:32:12.619755  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.620173  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.620821  576188 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-718366"
	I0930 10:32:12.620870  576188 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-718366"
	I0930 10:32:12.620910  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.621401  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.627312  576188 addons.go:69] Setting registry=true in profile "addons-718366"
	I0930 10:32:12.627345  576188 addons.go:234] Setting addon registry=true in "addons-718366"
	I0930 10:32:12.627389  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.627879  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629127  576188 addons.go:69] Setting cloud-spanner=true in profile "addons-718366"
	I0930 10:32:12.629593  576188 addons.go:234] Setting addon cloud-spanner=true in "addons-718366"
	I0930 10:32:12.629630  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.630378  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629311  576188 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-718366"
	I0930 10:32:12.634602  576188 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-718366"
	I0930 10:32:12.634666  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.629324  576188 addons.go:69] Setting default-storageclass=true in profile "addons-718366"
	I0930 10:32:12.637049  576188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-718366"
	I0930 10:32:12.637348  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.642682  576188 addons.go:69] Setting storage-provisioner=true in profile "addons-718366"
	I0930 10:32:12.642716  576188 addons.go:234] Setting addon storage-provisioner=true in "addons-718366"
	I0930 10:32:12.642757  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.643213  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629329  576188 addons.go:69] Setting gcp-auth=true in profile "addons-718366"
	I0930 10:32:12.652125  576188 mustload.go:65] Loading cluster: addons-718366
	I0930 10:32:12.652324  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:32:12.652576  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.656063  576188 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-718366"
	I0930 10:32:12.656091  576188 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-718366"
	I0930 10:32:12.656420  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.685014  576188 addons.go:69] Setting volcano=true in profile "addons-718366"
	I0930 10:32:12.685050  576188 addons.go:234] Setting addon volcano=true in "addons-718366"
	I0930 10:32:12.685092  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.685608  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629332  576188 addons.go:69] Setting ingress=true in profile "addons-718366"
	I0930 10:32:12.687633  576188 addons.go:234] Setting addon ingress=true in "addons-718366"
	I0930 10:32:12.687681  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.688210  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.705646  576188 addons.go:69] Setting volumesnapshots=true in profile "addons-718366"
	I0930 10:32:12.705685  576188 addons.go:234] Setting addon volumesnapshots=true in "addons-718366"
	I0930 10:32:12.705724  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.706207  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629336  576188 addons.go:69] Setting ingress-dns=true in profile "addons-718366"
	I0930 10:32:12.708613  576188 addons.go:234] Setting addon ingress-dns=true in "addons-718366"
	I0930 10:32:12.708663  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.709150  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629418  576188 out.go:177] * Verifying Kubernetes components...
	I0930 10:32:12.729494  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.825496  576188 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 10:32:12.832477  576188 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 10:32:12.835208  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:32:12.835233  576188 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 10:32:12.835325  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.835432  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:32:12.853660  576188 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 10:32:12.855707  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.857751  576188 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 10:32:12.857864  576188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 10:32:12.859599  576188 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:32:12.872767  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 10:32:12.872887  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.865884  576188 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:32:12.875361  576188 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 10:32:12.875445  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.866737  576188 addons.go:234] Setting addon default-storageclass=true in "addons-718366"
	I0930 10:32:12.882918  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.883376  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.887937  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:32:12.887958  576188 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 10:32:12.888030  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.894765  576188 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:32:12.894794  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 10:32:12.894866  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.872690  576188 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 10:32:12.908423  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 10:32:12.908785  576188 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 10:32:12.908833  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 10:32:12.908950  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.940598  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 10:32:12.941002  576188 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 10:32:12.946206  576188 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 10:32:12.948814  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:12.949045  576188 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:32:12.949077  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 10:32:12.949171  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.958006  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:12.959188  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 10:32:12.961743  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0930 10:32:12.962757  576188 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0930 10:32:12.973838  576188 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:32:12.973872  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 10:32:12.973943  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.979512  576188 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 10:32:12.979700  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 10:32:12.985709  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 10:32:12.985933  576188 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:32:12.985946  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 10:32:12.986012  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.996257  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 10:32:12.996526  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.001310  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 10:32:13.001479  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:32:13.001508  576188 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 10:32:13.001634  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.009342  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 10:32:13.017322  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 10:32:13.020721  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 10:32:13.021813  576188 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-718366"
	I0930 10:32:13.021852  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:13.022269  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:13.032608  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:32:13.032637  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 10:32:13.032715  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.058753  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.086640  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.090634  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.123015  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.154530  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.177875  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.178807  576188 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 10:32:13.178823  576188 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 10:32:13.178880  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.185183  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.204407  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.206891  576188 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 10:32:13.209370  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.213841  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.224068  576188 out.go:177]   - Using image docker.io/busybox:stable
	I0930 10:32:13.227725  576188 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:32:13.227749  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 10:32:13.227816  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.235510  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.260318  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	W0930 10:32:13.273338  576188 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0930 10:32:13.273406  576188 retry.go:31] will retry after 227.69102ms: ssh: handshake failed: EOF
	I0930 10:32:13.394925  576188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:32:13.486745  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:32:13.486818  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 10:32:13.623628  576188 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:32:13.623711  576188 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 10:32:13.630043  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:32:13.635130  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:32:13.638091  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:32:13.638162  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 10:32:13.659361  576188 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:32:13.659438  576188 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 10:32:13.671231  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:32:13.673254  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:32:13.673314  576188 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 10:32:13.699306  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:32:13.699326  576188 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 10:32:13.702344  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:32:13.749760  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:32:13.749837  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 10:32:13.762014  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 10:32:13.776095  576188 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 10:32:13.776167  576188 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 10:32:13.783348  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:32:13.795504  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:32:13.795584  576188 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 10:32:13.809799  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:32:13.809876  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 10:32:13.867266  576188 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:32:13.867337  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 10:32:13.895970  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:32:13.896050  576188 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 10:32:13.927958  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:32:13.928037  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 10:32:13.932218  576188 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:32:13.932292  576188 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 10:32:13.950651  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:32:13.969239  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:32:13.969315  576188 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 10:32:13.972998  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:32:13.973069  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 10:32:14.064724  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:32:14.068228  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:32:14.068306  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 10:32:14.084109  576188 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:32:14.084189  576188 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 10:32:14.101672  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:32:14.118680  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:32:14.118751  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 10:32:14.128305  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:32:14.128380  576188 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 10:32:14.228099  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:32:14.228175  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 10:32:14.260067  576188 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:32:14.260147  576188 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 10:32:14.267263  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:32:14.286038  576188 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:14.286113  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 10:32:14.406085  576188 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:32:14.406166  576188 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 10:32:14.409527  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:32:14.409623  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 10:32:14.443415  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:14.478742  576188 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:32:14.478821  576188 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 10:32:14.482790  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:32:14.482880  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 10:32:14.522876  576188 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:32:14.522950  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 10:32:14.538265  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:32:14.538348  576188 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 10:32:14.599317  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:32:14.621923  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:32:14.621995  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 10:32:14.718338  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:32:14.718419  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 10:32:14.771911  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:32:14.771992  576188 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 10:32:14.830453  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:32:16.302802  576188 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.293410506s)
	I0930 10:32:16.302886  576188 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0930 10:32:16.303052  576188 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.908050917s)
	I0930 10:32:16.303221  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.673105181s)
	I0930 10:32:16.304869  576188 node_ready.go:35] waiting up to 6m0s for node "addons-718366" to be "Ready" ...
	I0930 10:32:16.969956  576188 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-718366" context rescaled to 1 replicas
	I0930 10:32:17.813534  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.178328626s)
	I0930 10:32:17.813663  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.142359566s)
	I0930 10:32:18.331726  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:18.989036  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.2269377s)
	I0930 10:32:18.989135  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.205718923s)
	I0930 10:32:18.989300  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.038581281s)
	I0930 10:32:18.989533  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.924735717s)
	I0930 10:32:18.990072  576188 addons.go:475] Verifying addon registry=true in "addons-718366"
	I0930 10:32:18.989162  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.2865793s)
	I0930 10:32:18.989730  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.887983209s)
	I0930 10:32:18.990430  576188 addons.go:475] Verifying addon metrics-server=true in "addons-718366"
	I0930 10:32:18.989761  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.722429624s)
	I0930 10:32:18.990693  576188 addons.go:475] Verifying addon ingress=true in "addons-718366"
	I0930 10:32:18.989832  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.546341657s)
	W0930 10:32:18.991429  576188 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:32:18.991452  576188 retry.go:31] will retry after 214.891484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:32:18.989886  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.390493939s)
	I0930 10:32:18.993976  576188 out.go:177] * Verifying ingress addon...
	I0930 10:32:18.993993  576188 out.go:177] * Verifying registry addon...
	I0930 10:32:18.994136  576188 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-718366 service yakd-dashboard -n yakd-dashboard
	
	I0930 10:32:18.998130  576188 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 10:32:19.000026  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 10:32:19.012749  576188 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:32:19.012827  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0930 10:32:19.013748  576188 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0930 10:32:19.015873  576188 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 10:32:19.015899  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:19.206505  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:19.222406  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.391854023s)
	I0930 10:32:19.222443  576188 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-718366"
	I0930 10:32:19.225269  576188 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 10:32:19.228851  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 10:32:19.265510  576188 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:32:19.265536  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:19.502396  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:19.510520  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:19.733138  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.002773  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:20.004965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:20.233838  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.503847  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:20.505878  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:20.735188  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.808517  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:21.005465  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:21.006508  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:21.232962  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:21.508544  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:21.510168  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:21.746490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:21.919471  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 10:32:21.919583  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:21.945306  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:22.005204  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:22.020654  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:22.107096  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 10:32:22.156422  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.949861964s)
	I0930 10:32:22.161917  576188 addons.go:234] Setting addon gcp-auth=true in "addons-718366"
	I0930 10:32:22.161972  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:22.162436  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:22.180503  576188 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 10:32:22.180562  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:22.199581  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:22.234471  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:22.293532  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:22.295855  576188 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 10:32:22.298481  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:32:22.298507  576188 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 10:32:22.327120  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:32:22.327146  576188 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 10:32:22.354965  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:32:22.354989  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 10:32:22.374415  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:32:22.505237  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:22.505593  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:22.733404  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:22.810784  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:22.982766  576188 addons.go:475] Verifying addon gcp-auth=true in "addons-718366"
	I0930 10:32:22.985946  576188 out.go:177] * Verifying gcp-auth addon...
	I0930 10:32:22.989503  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 10:32:22.997921  576188 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:32:22.997948  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:23.007118  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:23.013282  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:23.232430  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:23.492864  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:23.502671  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:23.504311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:23.732922  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:23.993049  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:24.002595  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:24.005381  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:24.232995  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:24.492978  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:24.502914  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:24.503966  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:24.733190  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:24.993358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:25.002805  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:25.003600  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:25.232476  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:25.308811  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:25.492564  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:25.502308  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:25.504363  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:25.732965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:25.993474  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:26.003592  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:26.005468  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:26.232578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:26.493164  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:26.502818  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:26.504372  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:26.732670  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:26.993385  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:27.004214  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:27.004360  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:27.232999  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:27.493904  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:27.502518  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:27.504500  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:27.732700  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:27.809256  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:27.993469  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:28.002259  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:28.005142  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:28.232398  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:28.493035  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:28.502278  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:28.503849  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:28.732758  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:28.992992  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:29.003509  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:29.004188  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:29.232281  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:29.492609  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:29.501741  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:29.504027  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:29.732607  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:29.993719  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:30.005781  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:30.006305  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:30.232478  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:30.308805  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:30.493327  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:30.502458  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:30.504010  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:30.732161  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:30.993225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:31.002921  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:31.004619  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:31.232186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:31.492616  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:31.501951  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:31.503335  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:31.732881  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:31.993602  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:32.003681  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:32.004106  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:32.232590  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:32.308898  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:32.492382  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:32.502524  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:32.503242  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:32.732493  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:32.993359  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:33.003345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:33.004523  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:33.232210  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:33.492895  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:33.502809  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:33.503380  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:33.732345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:33.992694  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:34.002487  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:34.005419  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:34.232668  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:34.493362  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:34.502120  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:34.503290  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:34.732832  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:34.808872  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:34.993165  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:35.002532  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:35.003792  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:35.232243  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:35.492644  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:35.502151  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:35.504388  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:35.732397  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:35.993350  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:36.004449  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:36.006027  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:36.233129  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:36.493897  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:36.503054  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:36.503156  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:36.732619  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:36.993186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:37.003617  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:37.004328  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:37.232382  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:37.309099  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:37.492995  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:37.502362  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:37.503981  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:37.732628  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:37.992500  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:38.006378  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:38.009415  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:38.232948  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:38.493574  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:38.501907  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:38.503340  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:38.732877  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:38.993074  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:39.002160  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:39.004134  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:39.232913  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:39.492334  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:39.502072  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:39.504609  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:39.733100  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:39.808384  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:39.992997  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:40.002119  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:40.012472  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:40.232629  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:40.492673  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:40.501888  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:40.503434  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:40.732929  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:40.992943  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:41.003060  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:41.004287  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:41.232552  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:41.493144  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:41.501724  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:41.504150  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:41.732666  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:41.808700  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:41.992905  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:42.002375  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:42.004751  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:42.232856  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:42.494375  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:42.502604  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:42.503446  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:42.732867  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:42.993326  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:43.002100  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:43.004307  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:43.232852  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:43.493140  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:43.501743  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:43.503151  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:43.733043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:43.993474  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:44.003911  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:44.004199  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:44.232444  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:44.308664  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:44.492846  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:44.502736  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:44.503109  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:44.732682  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:44.992688  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:45.002473  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:45.006372  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:45.233808  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:45.493054  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:45.502649  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:45.504224  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:45.732634  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:45.992992  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:46.003067  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:46.005020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:46.232318  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:46.308743  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:46.493311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:46.501833  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:46.504311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:46.732337  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:46.993446  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:47.002979  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:47.004213  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:47.231826  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:47.493043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:47.502555  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:47.504579  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:47.733091  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:47.992702  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:48.006318  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:48.006591  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:48.232843  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:48.309156  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:48.492630  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:48.502793  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:48.505041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:48.732633  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:48.993020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:49.002803  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:49.005073  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:49.232599  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:49.493358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:49.502132  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:49.504685  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:49.732732  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:49.993101  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:50.008747  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:50.011007  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:50.232033  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:50.492811  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:50.502024  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:50.503194  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:50.732565  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:50.808123  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:50.992880  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:51.002470  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:51.004489  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:51.232566  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:51.493283  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:51.503096  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:51.504579  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:51.732498  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:51.997038  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:52.003743  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:52.004664  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:52.232961  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:52.493233  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:52.502560  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:52.504146  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:52.732196  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:52.809117  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:52.993352  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:53.002467  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:53.005258  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:53.232118  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:53.492883  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:53.503298  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:53.503937  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:53.732561  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:53.992888  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:54.003179  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:54.003621  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:54.232000  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:54.493201  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:54.502407  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:54.504047  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:54.732754  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:54.809165  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:54.993439  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:55.003745  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:55.006573  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:55.232921  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:55.532690  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:55.536564  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:55.537614  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:55.747717  576188 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:32:55.747798  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:55.813584  576188 node_ready.go:49] node "addons-718366" has status "Ready":"True"
	I0930 10:32:55.813696  576188 node_ready.go:38] duration metric: took 39.508639259s for node "addons-718366" to be "Ready" ...
	I0930 10:32:55.813729  576188 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:32:55.842207  576188 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:56.024341  576188 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:32:56.024415  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:56.026608  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:56.027649  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:56.238249  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:56.510908  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:56.599871  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:56.601369  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:56.734813  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:56.993968  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:57.004113  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:57.004475  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:57.234269  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:57.349188  576188 pod_ready.go:93] pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.349213  576188 pod_ready.go:82] duration metric: took 1.506927684s for pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.349264  576188 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.354868  576188 pod_ready.go:93] pod "etcd-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.354894  576188 pod_ready.go:82] duration metric: took 5.614429ms for pod "etcd-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.354911  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.360427  576188 pod_ready.go:93] pod "kube-apiserver-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.360453  576188 pod_ready.go:82] duration metric: took 5.533545ms for pod "kube-apiserver-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.360465  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.366443  576188 pod_ready.go:93] pod "kube-controller-manager-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.366468  576188 pod_ready.go:82] duration metric: took 5.995876ms for pod "kube-controller-manager-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.366481  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6d7ts" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.409203  576188 pod_ready.go:93] pod "kube-proxy-6d7ts" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.409232  576188 pod_ready.go:82] duration metric: took 42.742719ms for pod "kube-proxy-6d7ts" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.409245  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.494502  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:57.504034  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:57.504588  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:57.741490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:57.809369  576188 pod_ready.go:93] pod "kube-scheduler-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.809395  576188 pod_ready.go:82] duration metric: took 400.142122ms for pod "kube-scheduler-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.809406  576188 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.992791  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:58.002813  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:58.005194  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:58.235034  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:58.493263  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:58.505193  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:58.507236  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:58.735275  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:58.993601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:59.003135  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:59.005872  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:59.234232  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:59.493712  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:59.505146  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:59.506583  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:59.734233  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:59.817196  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:32:59.996524  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:00.018042  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:00.019456  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:00.235319  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:00.493875  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:00.513018  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:00.515874  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:00.735209  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:00.993692  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:01.009352  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:01.011139  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:01.234558  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:01.493345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:01.502755  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:01.504885  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:01.734041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:01.823332  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:01.993286  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:02.003595  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:02.005208  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:02.234246  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:02.494833  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:02.506503  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:02.507965  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:02.733979  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:02.994512  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:03.006008  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:03.008882  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:03.235987  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:03.502069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:03.504611  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:03.508145  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:03.734477  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:03.993075  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:04.002465  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:04.005969  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:04.237150  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:04.318563  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:04.493450  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:04.503535  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:04.505295  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:04.735410  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:04.993251  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:05.004507  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:05.005793  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:05.233147  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:05.493785  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:05.503110  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:05.504756  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:05.734929  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:05.993818  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:06.005361  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:06.008120  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:06.234165  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:06.494029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:06.506345  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:06.507733  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:06.736131  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:06.820180  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:06.997221  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:07.003917  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:07.012186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:07.235277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:07.494419  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:07.503987  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:07.506651  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:07.735614  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:07.993601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:08.007216  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:08.008949  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:08.235108  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:08.492758  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:08.506875  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:08.509276  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:08.734821  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:08.996343  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:09.003494  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:09.018021  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:09.233920  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:09.322744  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:09.495622  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:09.503188  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:09.505370  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:09.733302  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:09.993442  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:10.007158  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:10.014910  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:10.236566  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:10.493122  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:10.506277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:10.508170  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:10.734819  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.003392  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:11.017958  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:11.024396  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:11.241113  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.493717  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:11.503395  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:11.505398  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:11.734258  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.818701  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:11.993638  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:12.004028  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:12.005119  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:12.234546  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:12.493816  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:12.502382  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:12.504357  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:12.735120  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:12.993827  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:13.003086  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:13.005511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:13.240764  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:13.493012  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:13.502733  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:13.504695  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:13.739103  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:13.992794  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:14.002410  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:14.004962  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:14.234182  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:14.315747  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:14.493894  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:14.502951  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:14.504325  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:14.735374  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:14.995201  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:15.008392  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:15.009511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:15.239287  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:15.497798  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:15.505845  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:15.506265  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:15.733914  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:15.994121  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:16.002064  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:16.005323  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:16.235840  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:16.317348  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:16.493717  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:16.502559  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:16.504743  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:16.733456  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:16.993232  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:17.004117  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:17.005715  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:17.233977  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:17.493225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:17.508853  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:17.509324  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:17.733379  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:17.993128  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:18.002969  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:18.004753  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:18.235053  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:18.318055  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:18.494182  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:18.514063  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:18.515256  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:18.741787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:18.993437  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:19.006106  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:19.006941  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:19.238835  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:19.493578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:19.503346  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:19.507520  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:19.735461  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:19.993675  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:20.007386  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:20.009120  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:20.234329  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:20.494059  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:20.503676  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:20.508870  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:20.734675  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:20.819054  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:20.994644  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:21.005532  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:21.006881  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:21.233747  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:21.493683  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:21.502510  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:21.505435  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:21.733595  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:21.993151  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:22.004128  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:22.007124  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:22.234355  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:22.494138  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:22.522806  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:22.523017  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:22.733192  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:22.993544  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:23.003301  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:23.005614  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:23.234009  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:23.316009  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:23.493223  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:23.502465  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:23.504091  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:23.734075  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:23.993191  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:24.005564  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:24.006266  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:24.237087  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:24.494192  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:24.509932  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:24.511585  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:24.736086  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:24.993584  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:25.002534  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:25.004467  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:25.238048  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:25.316968  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:25.493170  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:25.502257  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:25.504345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:25.735840  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:25.993750  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:26.014041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:26.018512  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:26.234506  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:26.499206  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:26.522015  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:26.531645  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:26.734077  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:26.995142  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:27.002623  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:27.005131  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:27.234277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:27.509630  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:27.517834  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:27.519610  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:27.734102  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:27.815917  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:27.993225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:28.004799  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:28.007787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:28.233431  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:28.495964  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:28.505908  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:28.507029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:28.743601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:28.994222  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:29.005072  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:29.005919  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:29.234475  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:29.493121  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:29.503087  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:29.505224  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:29.733867  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:29.818825  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:29.993832  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:30.003223  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:30.009270  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:30.234345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:30.493573  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:30.503172  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:30.506658  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:30.734108  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:30.997885  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:31.003703  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:31.006228  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:31.234690  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:31.492946  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:31.504905  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:31.505338  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:31.734023  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:31.993444  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:32.005887  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:32.016752  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:32.234205  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:32.316627  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:32.493299  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:32.504102  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:32.512754  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:32.735003  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:32.994944  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:33.006628  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:33.007729  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:33.234441  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:33.493806  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:33.505141  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:33.507304  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:33.738773  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:33.993624  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:34.013205  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:34.017042  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:34.233853  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:34.316867  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:34.492641  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:34.502886  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:34.503705  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:34.734286  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:34.993856  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:35.002176  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:35.004584  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:35.233492  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:35.493057  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:35.502018  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:35.503973  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:35.734314  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:35.993679  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:36.002264  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:36.008072  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:36.233857  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:36.492965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:36.502535  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:36.504461  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:36.735017  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:36.816831  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:36.996016  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:37.008288  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:37.015405  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:37.234294  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:37.497062  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:37.504363  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:37.504553  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:37.735672  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:37.992884  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:38.005378  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:38.007796  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:38.237325  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:38.493907  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:38.505124  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:38.505765  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:38.734820  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:38.818257  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:38.994462  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:39.004598  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:39.014134  576188 kapi.go:107] duration metric: took 1m20.014106342s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 10:33:39.235130  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:39.494071  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:39.503484  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:39.734794  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:39.999698  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:40.010425  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:40.242604  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:40.499596  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:40.503174  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:40.735274  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:40.993423  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:41.003329  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:41.236791  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:41.316472  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:41.494610  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:41.503610  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:41.734043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:41.994292  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:42.002568  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:42.235021  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:42.493143  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:42.502820  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:42.733736  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:42.993069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:43.003100  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:43.234277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:43.317480  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:43.493236  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:43.502436  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:43.734921  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:43.992811  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:44.003086  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:44.233865  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:44.493110  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:44.502615  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:44.733541  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:44.993633  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:45.003852  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:45.234843  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:45.493514  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:45.502782  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:45.733458  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:45.817273  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:45.993706  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:46.016026  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:46.233913  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:46.498463  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:46.502757  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:46.734490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:46.993029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:47.004462  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:47.235637  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:47.503521  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:47.504652  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:47.741358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:47.993918  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:48.006378  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:48.234693  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:48.315817  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:48.493248  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:48.502422  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:48.740592  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:48.993401  576188 kapi.go:107] duration metric: took 1m26.003896883s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 10:33:48.996461  576188 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-718366 cluster.
	I0930 10:33:48.999075  576188 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 10:33:49.002456  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:49.005169  576188 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 10:33:49.235396  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:49.503984  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:49.734511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:50.004782  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:50.235070  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:50.323313  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:50.503830  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:50.734604  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:51.003831  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:51.234289  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:51.503943  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:51.733769  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.002609  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:52.234340  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.507200  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:52.734763  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.818591  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:53.004428  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:53.235787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:53.502862  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:53.734437  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:54.007069  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:54.235077  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:54.503292  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:54.735359  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:55.002193  576188 kapi.go:107] duration metric: took 1m36.004059929s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 10:33:55.234033  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:55.317516  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:55.734069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:56.234143  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:56.734127  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.233654  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.738983  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.816482  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:58.234471  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:58.734677  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.238020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.734710  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.817182  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:00.236578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:00.734525  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.234627  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.734546  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.825323  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:02.233540  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:02.738772  576188 kapi.go:107] duration metric: took 1m43.50991885s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 10:34:02.744119  576188 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0930 10:34:02.746979  576188 addons.go:510] duration metric: took 1m50.129091289s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0930 10:34:04.316300  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:06.815648  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:09.315052  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:11.315816  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:13.316065  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:15.316190  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:16.315831  576188 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"True"
	I0930 10:34:16.315861  576188 pod_ready.go:82] duration metric: took 1m18.506446968s for pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.315874  576188 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.321502  576188 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace has status "Ready":"True"
	I0930 10:34:16.321532  576188 pod_ready.go:82] duration metric: took 5.649022ms for pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.321583  576188 pod_ready.go:39] duration metric: took 1m20.507828006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:34:16.321605  576188 api_server.go:52] waiting for apiserver process to appear ...
	I0930 10:34:16.321638  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:16.321706  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:16.386809  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:16.386886  576188 cri.go:89] found id: ""
	I0930 10:34:16.386900  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:16.386984  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.391025  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:16.391106  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:16.435062  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:16.435085  576188 cri.go:89] found id: ""
	I0930 10:34:16.435094  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:16.435153  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.438701  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:16.438773  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:16.478714  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:16.478737  576188 cri.go:89] found id: ""
	I0930 10:34:16.478746  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:16.478802  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.482397  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:16.482471  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:16.537909  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:16.537932  576188 cri.go:89] found id: ""
	I0930 10:34:16.537940  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:16.538010  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.541631  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:16.541707  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:16.584294  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:16.584316  576188 cri.go:89] found id: ""
	I0930 10:34:16.584324  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:16.584387  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.588121  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:16.588197  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:16.627920  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:16.627943  576188 cri.go:89] found id: ""
	I0930 10:34:16.627951  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:16.628010  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.631831  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:16.631910  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:16.670917  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:16.670987  576188 cri.go:89] found id: ""
	I0930 10:34:16.671002  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:16.671067  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.674818  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:16.674843  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:16.691258  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:16.691286  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:16.781066  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:16.781106  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:16.824438  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:16.824473  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:16.883060  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:16.883091  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:16.989887  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:16.989925  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:17.064721  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.064968  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.065190  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.065432  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.065664  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.065898  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.067781  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.067995  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:17.104140  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:17.104180  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:17.291559  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:17.291591  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:17.344411  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:17.344446  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:17.394328  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:17.394358  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:17.437492  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:17.437522  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:17.506642  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:17.506679  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:17.557358  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:17.557386  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:17.557577  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:17.557600  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.557623  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.557643  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.557652  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.557663  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:17.557670  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:17.557678  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:27.559396  576188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:34:27.573481  576188 api_server.go:72] duration metric: took 2m14.955998532s to wait for apiserver process to appear ...
	I0930 10:34:27.573512  576188 api_server.go:88] waiting for apiserver healthz status ...
	I0930 10:34:27.573570  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:27.573627  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:27.612157  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:27.612193  576188 cri.go:89] found id: ""
	I0930 10:34:27.612201  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:27.612290  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.615922  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:27.615995  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:27.657373  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:27.657395  576188 cri.go:89] found id: ""
	I0930 10:34:27.657413  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:27.657473  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.661114  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:27.661186  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:27.699276  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:27.699300  576188 cri.go:89] found id: ""
	I0930 10:34:27.699309  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:27.699385  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.703275  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:27.703356  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:27.743333  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:27.743353  576188 cri.go:89] found id: ""
	I0930 10:34:27.743361  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:27.743432  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.746997  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:27.747079  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:27.787583  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:27.787605  576188 cri.go:89] found id: ""
	I0930 10:34:27.787613  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:27.787691  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.791098  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:27.791173  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:27.850541  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:27.850563  576188 cri.go:89] found id: ""
	I0930 10:34:27.850575  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:27.850631  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.854249  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:27.854319  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:27.893234  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:27.893260  576188 cri.go:89] found id: ""
	I0930 10:34:27.893268  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:27.893322  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.897133  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:27.897160  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:27.951284  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:27.951319  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:28.003152  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:28.003184  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:28.043478  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:28.043557  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:28.115108  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:28.115147  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:28.159435  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:28.159461  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:28.258636  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:28.258677  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:28.302989  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:28.303015  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:28.370971  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.371245  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.371445  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.371681  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.371871  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.372100  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.373981  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.374197  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:28.410471  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:28.410499  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:28.427272  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:28.427345  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:28.564680  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:28.564708  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:28.622261  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:28.622295  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:28.714780  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:28.714813  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:28.714867  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:28.714881  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.714889  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.714916  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.714924  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.714934  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:28.714940  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:28.714947  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:38.716957  576188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0930 10:34:38.725719  576188 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0930 10:34:38.726755  576188 api_server.go:141] control plane version: v1.31.1
	I0930 10:34:38.726784  576188 api_server.go:131] duration metric: took 11.153263628s to wait for apiserver health ...
	I0930 10:34:38.726809  576188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 10:34:38.726837  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:38.726904  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:38.773675  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:38.773695  576188 cri.go:89] found id: ""
	I0930 10:34:38.773703  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:38.773769  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.777305  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:38.777389  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:38.819225  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:38.819245  576188 cri.go:89] found id: ""
	I0930 10:34:38.819254  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:38.819313  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.823902  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:38.823980  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:38.865257  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:38.865278  576188 cri.go:89] found id: ""
	I0930 10:34:38.865301  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:38.865358  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.869041  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:38.869123  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:38.909299  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:38.909323  576188 cri.go:89] found id: ""
	I0930 10:34:38.909331  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:38.909388  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.912958  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:38.913039  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:38.951466  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:38.951489  576188 cri.go:89] found id: ""
	I0930 10:34:38.951497  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:38.951555  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.955148  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:38.955250  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:38.999433  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:38.999506  576188 cri.go:89] found id: ""
	I0930 10:34:38.999523  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:38.999588  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:39.003640  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:39.003758  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:39.042975  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:39.043045  576188 cri.go:89] found id: ""
	I0930 10:34:39.043060  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:39.043118  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:39.046722  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:39.046747  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:39.115864  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:39.115902  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:39.186356  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.186605  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.186799  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.187028  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.187213  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.187443  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.189229  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.189452  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:39.226877  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:39.226918  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:39.244214  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:39.244244  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:39.303635  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:39.303672  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:39.346611  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:39.346643  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:39.385388  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:39.385425  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:39.449017  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:39.449056  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:39.546447  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:39.546489  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:39.596349  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:39.596379  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:39.729086  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:39.729117  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:39.806140  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:39.806172  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:39.854236  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:39.854262  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:39.854352  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:39.854377  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.854389  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.854401  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.854407  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.854414  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:39.854426  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:39.854432  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:49.868953  576188 system_pods.go:59] 18 kube-system pods found
	I0930 10:34:49.868992  576188 system_pods.go:61] "coredns-7c65d6cfc9-dtmzl" [7a2c2f43-a853-49df-b58f-e6f6141a2737] Running
	I0930 10:34:49.869001  576188 system_pods.go:61] "csi-hostpath-attacher-0" [66af66d7-7f8b-4650-a4b6-9b5162ab76a1] Running
	I0930 10:34:49.869006  576188 system_pods.go:61] "csi-hostpath-resizer-0" [2d3f5d2f-f058-4a6b-b1c3-f25d3d549257] Running
	I0930 10:34:49.869012  576188 system_pods.go:61] "csi-hostpathplugin-mzdc5" [1f8386ff-e365-4d77-85c4-4380cc952f88] Running
	I0930 10:34:49.869048  576188 system_pods.go:61] "etcd-addons-718366" [41eb7870-f127-4cfa-8bb3-b32081bec033] Running
	I0930 10:34:49.869053  576188 system_pods.go:61] "kindnet-cx2x5" [cc2b53ef-4eba-4f69-a5e3-d3b1b8aee067] Running
	I0930 10:34:49.869062  576188 system_pods.go:61] "kube-apiserver-addons-718366" [d591a564-dc70-47d3-9e30-ac55eb92f702] Running
	I0930 10:34:49.869066  576188 system_pods.go:61] "kube-controller-manager-addons-718366" [566dbcee-1187-41f2-aaf4-b462be8fedc8] Running
	I0930 10:34:49.869079  576188 system_pods.go:61] "kube-ingress-dns-minikube" [201cdd5a-777d-406a-a3c3-ae55dfa26b03] Running
	I0930 10:34:49.869083  576188 system_pods.go:61] "kube-proxy-6d7ts" [1c00ed0e-dc57-4a81-b778-b92a64f0e0c1] Running
	I0930 10:34:49.869087  576188 system_pods.go:61] "kube-scheduler-addons-718366" [2159256d-1219-4d6d-9ec4-10a229c89118] Running
	I0930 10:34:49.869092  576188 system_pods.go:61] "metrics-server-84c5f94fbc-jqf86" [37c7c588-691f-43b1-bc7e-d9d29b8c740e] Running
	I0930 10:34:49.869130  576188 system_pods.go:61] "nvidia-device-plugin-daemonset-4vhfz" [409875b6-caeb-49b0-a6a3-4adab5c26abf] Running
	I0930 10:34:49.869143  576188 system_pods.go:61] "registry-66c9cd494c-zx9j9" [a2779ea5-90ce-41c6-800a-4fd0e62455e1] Running
	I0930 10:34:49.869147  576188 system_pods.go:61] "registry-proxy-nxhd5" [78962db4-c230-431b-b141-405fd6389146] Running
	I0930 10:34:49.869151  576188 system_pods.go:61] "snapshot-controller-56fcc65765-fnzp5" [00072f66-80b8-45a4-b940-6db1fba0c14b] Running
	I0930 10:34:49.869156  576188 system_pods.go:61] "snapshot-controller-56fcc65765-rtd66" [e61b1df1-f9b8-4ed6-b8bb-30c16e9e1a30] Running
	I0930 10:34:49.869160  576188 system_pods.go:61] "storage-provisioner" [fcd0fbac-220e-4dd5-a1a6-3ecae26b1962] Running
	I0930 10:34:49.869169  576188 system_pods.go:74] duration metric: took 11.14235034s to wait for pod list to return data ...
	I0930 10:34:49.869180  576188 default_sa.go:34] waiting for default service account to be created ...
	I0930 10:34:49.872043  576188 default_sa.go:45] found service account: "default"
	I0930 10:34:49.872072  576188 default_sa.go:55] duration metric: took 2.885942ms for default service account to be created ...
	I0930 10:34:49.872082  576188 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 10:34:49.882723  576188 system_pods.go:86] 18 kube-system pods found
	I0930 10:34:49.882762  576188 system_pods.go:89] "coredns-7c65d6cfc9-dtmzl" [7a2c2f43-a853-49df-b58f-e6f6141a2737] Running
	I0930 10:34:49.882770  576188 system_pods.go:89] "csi-hostpath-attacher-0" [66af66d7-7f8b-4650-a4b6-9b5162ab76a1] Running
	I0930 10:34:49.882794  576188 system_pods.go:89] "csi-hostpath-resizer-0" [2d3f5d2f-f058-4a6b-b1c3-f25d3d549257] Running
	I0930 10:34:49.882803  576188 system_pods.go:89] "csi-hostpathplugin-mzdc5" [1f8386ff-e365-4d77-85c4-4380cc952f88] Running
	I0930 10:34:49.882815  576188 system_pods.go:89] "etcd-addons-718366" [41eb7870-f127-4cfa-8bb3-b32081bec033] Running
	I0930 10:34:49.882820  576188 system_pods.go:89] "kindnet-cx2x5" [cc2b53ef-4eba-4f69-a5e3-d3b1b8aee067] Running
	I0930 10:34:49.882825  576188 system_pods.go:89] "kube-apiserver-addons-718366" [d591a564-dc70-47d3-9e30-ac55eb92f702] Running
	I0930 10:34:49.882835  576188 system_pods.go:89] "kube-controller-manager-addons-718366" [566dbcee-1187-41f2-aaf4-b462be8fedc8] Running
	I0930 10:34:49.882840  576188 system_pods.go:89] "kube-ingress-dns-minikube" [201cdd5a-777d-406a-a3c3-ae55dfa26b03] Running
	I0930 10:34:49.882845  576188 system_pods.go:89] "kube-proxy-6d7ts" [1c00ed0e-dc57-4a81-b778-b92a64f0e0c1] Running
	I0930 10:34:49.882855  576188 system_pods.go:89] "kube-scheduler-addons-718366" [2159256d-1219-4d6d-9ec4-10a229c89118] Running
	I0930 10:34:49.882859  576188 system_pods.go:89] "metrics-server-84c5f94fbc-jqf86" [37c7c588-691f-43b1-bc7e-d9d29b8c740e] Running
	I0930 10:34:49.882882  576188 system_pods.go:89] "nvidia-device-plugin-daemonset-4vhfz" [409875b6-caeb-49b0-a6a3-4adab5c26abf] Running
	I0930 10:34:49.882887  576188 system_pods.go:89] "registry-66c9cd494c-zx9j9" [a2779ea5-90ce-41c6-800a-4fd0e62455e1] Running
	I0930 10:34:49.882891  576188 system_pods.go:89] "registry-proxy-nxhd5" [78962db4-c230-431b-b141-405fd6389146] Running
	I0930 10:34:49.882913  576188 system_pods.go:89] "snapshot-controller-56fcc65765-fnzp5" [00072f66-80b8-45a4-b940-6db1fba0c14b] Running
	I0930 10:34:49.882918  576188 system_pods.go:89] "snapshot-controller-56fcc65765-rtd66" [e61b1df1-f9b8-4ed6-b8bb-30c16e9e1a30] Running
	I0930 10:34:49.882927  576188 system_pods.go:89] "storage-provisioner" [fcd0fbac-220e-4dd5-a1a6-3ecae26b1962] Running
	I0930 10:34:49.882936  576188 system_pods.go:126] duration metric: took 10.846857ms to wait for k8s-apps to be running ...
	I0930 10:34:49.882947  576188 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 10:34:49.883021  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:34:49.895565  576188 system_svc.go:56] duration metric: took 12.60696ms WaitForService to wait for kubelet
	I0930 10:34:49.895595  576188 kubeadm.go:582] duration metric: took 2m37.278117729s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:34:49.895621  576188 node_conditions.go:102] verifying NodePressure condition ...
	I0930 10:34:49.898702  576188 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0930 10:34:49.898735  576188 node_conditions.go:123] node cpu capacity is 2
	I0930 10:34:49.898746  576188 node_conditions.go:105] duration metric: took 3.119274ms to run NodePressure ...
	I0930 10:34:49.898785  576188 start.go:241] waiting for startup goroutines ...
	I0930 10:34:49.898799  576188 start.go:246] waiting for cluster config update ...
	I0930 10:34:49.898824  576188 start.go:255] writing updated cluster config ...
	I0930 10:34:49.899193  576188 ssh_runner.go:195] Run: rm -f paused
	I0930 10:34:50.233812  576188 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 10:34:50.238556  576188 out.go:177] * Done! kubectl is now configured to use "addons-718366" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 30 10:44:04 addons-718366 crio[968]: time="2024-09-30 10:44:04.218760928Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:426bd6f59b95ddfd8fefae999db0d5cf56cafb7a3a8bb29e62e2aeb22408342d UID:e066e902-ae7c-4eca-8494-b09d0ac67bce NetNS:/var/run/netns/1911dfd9-1809-411c-9748-3c2989315c0f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 30 10:44:04 addons-718366 crio[968]: time="2024-09-30 10:44:04.218895480Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Sep 30 10:44:04 addons-718366 crio[968]: time="2024-09-30 10:44:04.244357741Z" level=info msg="Stopped pod sandbox: 426bd6f59b95ddfd8fefae999db0d5cf56cafb7a3a8bb29e62e2aeb22408342d" id=199b62a5-6e6c-4636-a099-1fc54b7d93bd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:44:04 addons-718366 crio[968]: time="2024-09-30 10:44:04.925784770Z" level=info msg="Stopping container: 27cb02aeefc4fac55d5bcd5375677ec070b3092fdf6ec5f9485e548b2b33c68a (timeout: 30s)" id=b7073169-8d89-432b-89df-0df9e0bd4517 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:44:04 addons-718366 conmon[3293]: conmon 27cb02aeefc4fac55d5b <ninfo>: container 3305 exited with status 2
	Sep 30 10:44:04 addons-718366 crio[968]: time="2024-09-30 10:44:04.959630033Z" level=info msg="Stopping container: 68aaee132aa27317dc65602685ff7bc3e72debee409f831a9a004bb03c6bc64e (timeout: 30s)" id=93edea9d-4f85-42a5-93f5-45619ff702b0 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.115112735Z" level=info msg="Stopped container 27cb02aeefc4fac55d5bcd5375677ec070b3092fdf6ec5f9485e548b2b33c68a: kube-system/registry-66c9cd494c-zx9j9/registry" id=b7073169-8d89-432b-89df-0df9e0bd4517 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.118965384Z" level=info msg="Stopping pod sandbox: db9ba6ce949d4b8dec6b36084aabaa21ef2df6baf4983cd1ec0ade80dc77e442" id=837e060d-7a23-411c-b992-36c897b57a1e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.119214527Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-zx9j9 Namespace:kube-system ID:db9ba6ce949d4b8dec6b36084aabaa21ef2df6baf4983cd1ec0ade80dc77e442 UID:a2779ea5-90ce-41c6-800a-4fd0e62455e1 NetNS:/var/run/netns/0e707c81-d038-4cef-83de-0e43c661a89b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.119346873Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-zx9j9 from CNI network \"kindnet\" (type=ptp)"
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.129989414Z" level=info msg="Stopped container 68aaee132aa27317dc65602685ff7bc3e72debee409f831a9a004bb03c6bc64e: kube-system/registry-proxy-nxhd5/registry-proxy" id=93edea9d-4f85-42a5-93f5-45619ff702b0 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.131750610Z" level=info msg="Stopping pod sandbox: 5cd1acb30fe9c57544b3e170bc17faee99e9fa3c43d440c8c452a6034c206448" id=5c68c50b-5830-4f1a-bee0-8f03b36f358f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.138239175Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-TKC2RL6Q5WBCAISP - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-GDKGD3R5TBCYOFQK - [0:0]\n:KUBE-HP-3VWIIGBC3OY6AQDR - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-7w899_ingress-nginx_f8973837-432c-4179-90fb-061f9cdc391b_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-GDKGD3R5TBCYOFQK\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-7w899_ingress-nginx_f8973837-432c-4179-90fb-061f9cdc391b_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-TKC2RL6Q5WBCAISP\n-A KUBE-HP-GDKGD3R5TBCYOFQK -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-7w899_ingress-nginx_f8973837-432c-4179-90fb-061f9cdc391b_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-GDKGD3R5TBCYOFQK -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-7w899_ingress-nginx_f8973837-432c-4179-90fb-061f9cdc391b_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.20:443\n-A KUBE-HP-TKC2RL6Q5WBCAISP -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-7w899_ingress-nginx_f8973837-432c-4179-90fb-061f9cdc391b_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-TKC2RL6Q5WBCAISP -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-7w899_ingress-nginx_f8973837-432c-4179-90fb-061f9cdc391b_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.20:80\n-X KUBE-HP-3VWIIGBC3OY6AQDR\nCOMMIT\n"
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.142995819Z" level=info msg="Closing host port tcp:5000"
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.146598063Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.146817298Z" level=info msg="Got pod network &{Name:registry-proxy-nxhd5 Namespace:kube-system ID:5cd1acb30fe9c57544b3e170bc17faee99e9fa3c43d440c8c452a6034c206448 UID:78962db4-c230-431b-b141-405fd6389146 NetNS:/var/run/netns/28df2d6c-aad2-400a-9c16-6295e3b65d28 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.146963625Z" level=info msg="Deleting pod kube-system_registry-proxy-nxhd5 from CNI network \"kindnet\" (type=ptp)"
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.185015982Z" level=info msg="Stopped pod sandbox: db9ba6ce949d4b8dec6b36084aabaa21ef2df6baf4983cd1ec0ade80dc77e442" id=837e060d-7a23-411c-b992-36c897b57a1e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.193512999Z" level=info msg="Removing container: 27cb02aeefc4fac55d5bcd5375677ec070b3092fdf6ec5f9485e548b2b33c68a" id=94456f42-d248-480f-afe9-caf7ee3f052c name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.205905619Z" level=info msg="Stopped pod sandbox: 5cd1acb30fe9c57544b3e170bc17faee99e9fa3c43d440c8c452a6034c206448" id=5c68c50b-5830-4f1a-bee0-8f03b36f358f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.238620825Z" level=info msg="Removed container 27cb02aeefc4fac55d5bcd5375677ec070b3092fdf6ec5f9485e548b2b33c68a: kube-system/registry-66c9cd494c-zx9j9/registry" id=94456f42-d248-480f-afe9-caf7ee3f052c name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.857686530Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ec878ad2-c8b9-4245-8cc8-798110832580 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:44:05 addons-718366 crio[968]: time="2024-09-30 10:44:05.857927886Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ec878ad2-c8b9-4245-8cc8-798110832580 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:44:06 addons-718366 crio[968]: time="2024-09-30 10:44:06.204588314Z" level=info msg="Removing container: 68aaee132aa27317dc65602685ff7bc3e72debee409f831a9a004bb03c6bc64e" id=104b47b3-ab1b-4c7e-9429-b86e43a19bef name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 30 10:44:06 addons-718366 crio[968]: time="2024-09-30 10:44:06.246926983Z" level=info msg="Removed container 68aaee132aa27317dc65602685ff7bc3e72debee409f831a9a004bb03c6bc64e: kube-system/registry-proxy-nxhd5/registry-proxy" id=104b47b3-ab1b-4c7e-9429-b86e43a19bef name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	5c8d71a37f788       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             10 minutes ago      Running             controller                 0                   b191e28d983d7       ingress-nginx-controller-bc57996ff-7w899
	f7e963bb19262       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 10 minutes ago      Running             gcp-auth                   0                   c248cf7b4c141       gcp-auth-89d5ffd79-4zcrm
	2babe6190e81a       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              10 minutes ago      Running             yakd                       0                   d17238ba7148c       yakd-dashboard-67d98fc6b-bjwsv
	f6af04ac4123e       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   7540dac0ccdc7       nvidia-device-plugin-daemonset-4vhfz
	2db2a4c8a8db1       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             10 minutes ago      Exited              patch                      1                   b7854eba67a5d       ingress-nginx-admission-patch-t2fbr
	796bea4cf337f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              create                     0                   c25003ea82c06       ingress-nginx-admission-create-m827b
	d5d130ee164aa       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             10 minutes ago      Running             local-path-provisioner     0                   3b673820921ac       local-path-provisioner-86d989889c-stxvp
	7f69273237e01       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d               10 minutes ago      Running             cloud-spanner-emulator     0                   f2b1f198facf7       cloud-spanner-emulator-5b584cc74-jgnx2
	7f25834811580       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        10 minutes ago      Running             metrics-server             0                   3898176a51cc9       metrics-server-84c5f94fbc-jqf86
	eddd4385f0594       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             11 minutes ago      Running             minikube-ingress-dns       0                   f8757e1329997       kube-ingress-dns-minikube
	8564280a03e37       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner        0                   8ea3828da6af2       storage-provisioner
	8970b526b14d3       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             11 minutes ago      Running             coredns                    0                   fc8f26b163074       coredns-7c65d6cfc9-dtmzl
	8432f7c87eb37       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            11 minutes ago      Running             gadget                     0                   6d70046ed9d17       gadget-ltftl
	97d43354c9c18       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             11 minutes ago      Running             kindnet-cni                0                   b504155dced4a       kindnet-cx2x5
	d94629297e53a       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             11 minutes ago      Running             kube-proxy                 0                   42061a7582848       kube-proxy-6d7ts
	f46dcd2ffd212       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             12 minutes ago      Running             kube-scheduler             0                   4328be32dbdda       kube-scheduler-addons-718366
	8427a90f7890f       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             12 minutes ago      Running             kube-controller-manager    0                   61fcf6c446cf3       kube-controller-manager-addons-718366
	162d3240be19c       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             12 minutes ago      Running             kube-apiserver             0                   c729a09320dc3       kube-apiserver-addons-718366
	c0e6564b9b165       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             12 minutes ago      Running             etcd                       0                   e280627a20055       etcd-addons-718366
	
	
	==> coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] <==
	[INFO] 10.244.0.16:51134 - 3862 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009631s
	[INFO] 10.244.0.16:51134 - 28713 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002595396s
	[INFO] 10.244.0.16:51134 - 38278 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002469697s
	[INFO] 10.244.0.16:51134 - 60275 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000123813s
	[INFO] 10.244.0.16:51134 - 20504 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000921133s
	[INFO] 10.244.0.16:42758 - 57933 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109519s
	[INFO] 10.244.0.16:42758 - 58163 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000222452s
	[INFO] 10.244.0.16:46219 - 49034 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046514s
	[INFO] 10.244.0.16:46219 - 48861 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090599s
	[INFO] 10.244.0.16:38841 - 23335 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049238s
	[INFO] 10.244.0.16:38841 - 23162 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000120719s
	[INFO] 10.244.0.16:36810 - 56287 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001370566s
	[INFO] 10.244.0.16:36810 - 56459 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001599623s
	[INFO] 10.244.0.16:53737 - 45454 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071399s
	[INFO] 10.244.0.16:53737 - 45306 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067083s
	[INFO] 10.244.0.19:60043 - 33793 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164099s
	[INFO] 10.244.0.19:45569 - 24882 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008945s
	[INFO] 10.244.0.19:37408 - 63394 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120645s
	[INFO] 10.244.0.19:32799 - 53535 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000581761s
	[INFO] 10.244.0.19:55061 - 24202 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127028s
	[INFO] 10.244.0.19:52877 - 28567 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066952s
	[INFO] 10.244.0.19:41260 - 35512 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00224238s
	[INFO] 10.244.0.19:50161 - 49874 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002246064s
	[INFO] 10.244.0.19:55943 - 60460 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001310399s
	[INFO] 10.244.0.19:51706 - 64277 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001688868s
	
	
	==> describe nodes <==
	Name:               addons-718366
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-718366
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=addons-718366
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T10_32_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-718366
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:32:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-718366
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:44:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:43:41 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:43:41 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:43:41 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:43:41 +0000   Mon, 30 Sep 2024 10:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-718366
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 77d89a06001e4e62a34a490bff9aa946
	  System UUID:                905a5f23-cdd8-48a6-a301-0dc3d894de03
	  Boot ID:                    cd5783c9-92b8-4cba-8495-065a6f022f89
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-5b584cc74-jgnx2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-ltftl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-4zcrm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-7w899    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-dtmzl                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-addons-718366                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-cx2x5                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-718366                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-718366       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6d7ts                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-718366                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-jqf86             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-4vhfz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-stxvp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-bjwsv              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-718366 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-718366 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-718366 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-718366 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-718366 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-718366 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-718366 event: Registered Node addons-718366 in Controller
	  Normal   NodeReady                11m                kubelet          Node addons-718366 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 09:39] IPVS: rr: TCP 192.168.49.254:8443 - no destination available
	[Sep30 10:06] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] <==
	{"level":"warn","ts":"2024-09-30T10:32:15.963806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"502.065606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:32:15.963874Z","caller":"traceutil/trace.go:171","msg":"trace[1639210312] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:392; }","duration":"502.169636ms","start":"2024-09-30T10:32:15.461690Z","end":"2024-09-30T10:32:15.963860Z","steps":["trace[1639210312] 'agreement among raft nodes before linearized reading'  (duration: 340.590554ms)","trace[1639210312] 'range keys from in-memory index tree'  (duration: 161.458364ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:15.963905Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T10:32:15.461681Z","time spent":"502.21679ms","remote":"127.0.0.1:47722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/kube-system/registry\" "}
	{"level":"warn","ts":"2024-09-30T10:32:15.975672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.962851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:32:15.975744Z","caller":"traceutil/trace.go:171","msg":"trace[1497020414] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:392; }","duration":"514.06067ms","start":"2024-09-30T10:32:15.461669Z","end":"2024-09-30T10:32:15.975729Z","steps":["trace[1497020414] 'agreement among raft nodes before linearized reading'  (duration: 340.620576ms)","trace[1497020414] 'range keys from in-memory index tree'  (duration: 173.328327ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:15.975777Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T10:32:15.461648Z","time spent":"514.122937ms","remote":"127.0.0.1:47722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/default/cloud-spanner-emulator\" "}
	{"level":"warn","ts":"2024-09-30T10:32:15.976129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"543.814465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-cx2x5\" ","response":"range_response_count:1 size:5102"}
	{"level":"info","ts":"2024-09-30T10:32:15.976175Z","caller":"traceutil/trace.go:171","msg":"trace[164340211] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-cx2x5; range_end:; response_count:1; response_revision:392; }","duration":"543.863112ms","start":"2024-09-30T10:32:15.432303Z","end":"2024-09-30T10:32:15.976166Z","steps":["trace[164340211] 'agreement among raft nodes before linearized reading'  (duration: 369.990963ms)","trace[164340211] 'range keys from in-memory index tree'  (duration: 173.801529ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:15.976202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T10:32:15.432284Z","time spent":"543.912866ms","remote":"127.0.0.1:47438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":5126,"request content":"key:\"/registry/pods/kube-system/kindnet-cx2x5\" "}
	{"level":"warn","ts":"2024-09-30T10:32:15.993928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.619432ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032241754536841 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-6d7ts.17f9ff067a572d5a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-6d7ts.17f9ff067a572d5a\" value_size:634 lease:8128032241754536491 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-30T10:32:16.005403Z","caller":"traceutil/trace.go:171","msg":"trace[221753668] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:401; }","duration":"151.390807ms","start":"2024-09-30T10:32:15.853990Z","end":"2024-09-30T10:32:16.005381Z","steps":["trace[221753668] 'read index received'  (duration: 34.289µs)","trace[221753668] 'applied index is now lower than readState.Index'  (duration: 151.353163ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T10:32:16.025685Z","caller":"traceutil/trace.go:171","msg":"trace[1564697964] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"248.129407ms","start":"2024-09-30T10:32:15.777531Z","end":"2024-09-30T10:32:16.025660Z","steps":["trace[1564697964] 'process raft request'  (duration: 44.642585ms)","trace[1564697964] 'compare'  (duration: 50.754322ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:16.045362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.356368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:32:16.056252Z","caller":"traceutil/trace.go:171","msg":"trace[667812772] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"202.239182ms","start":"2024-09-30T10:32:15.853984Z","end":"2024-09-30T10:32:16.056223Z","steps":["trace[667812772] 'agreement among raft nodes before linearized reading'  (duration: 191.330883ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.205755Z","caller":"traceutil/trace.go:171","msg":"trace[685107055] linearizableReadLoop","detail":"{readStateIndex:407; appliedIndex:402; }","duration":"111.750218ms","start":"2024-09-30T10:32:16.093990Z","end":"2024-09-30T10:32:16.205740Z","steps":["trace[685107055] 'read index received'  (duration: 110.159088ms)","trace[685107055] 'applied index is now lower than readState.Index'  (duration: 1.59063ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:16.205871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.850417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-30T10:32:16.205893Z","caller":"traceutil/trace.go:171","msg":"trace[1709110772] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:398; }","duration":"111.899655ms","start":"2024-09-30T10:32:16.093987Z","end":"2024-09-30T10:32:16.205887Z","steps":["trace[1709110772] 'agreement among raft nodes before linearized reading'  (duration: 111.814619ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206103Z","caller":"traceutil/trace.go:171","msg":"trace[320854091] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"112.610299ms","start":"2024-09-30T10:32:16.093485Z","end":"2024-09-30T10:32:16.206096Z","steps":["trace[320854091] 'process raft request'  (duration: 112.105081ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206227Z","caller":"traceutil/trace.go:171","msg":"trace[273332653] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"112.693685ms","start":"2024-09-30T10:32:16.093527Z","end":"2024-09-30T10:32:16.206221Z","steps":["trace[273332653] 'process raft request'  (duration: 112.132609ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206323Z","caller":"traceutil/trace.go:171","msg":"trace[2053403231] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"112.580695ms","start":"2024-09-30T10:32:16.093736Z","end":"2024-09-30T10:32:16.206316Z","steps":["trace[2053403231] 'process raft request'  (duration: 111.949688ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206367Z","caller":"traceutil/trace.go:171","msg":"trace[754986013] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"112.543945ms","start":"2024-09-30T10:32:16.093817Z","end":"2024-09-30T10:32:16.206361Z","steps":["trace[754986013] 'process raft request'  (duration: 111.897775ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.214557Z","caller":"traceutil/trace.go:171","msg":"trace[1253211950] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"121.318358ms","start":"2024-09-30T10:32:16.093218Z","end":"2024-09-30T10:32:16.214537Z","steps":["trace[1253211950] 'process raft request'  (duration: 110.795453ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:42:02.621285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1521}
	{"level":"info","ts":"2024-09-30T10:42:02.650830Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1521,"took":"28.997753ms","hash":1350178088,"current-db-size-bytes":6029312,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3149824,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-09-30T10:42:02.650884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1350178088,"revision":1521,"compact-revision":-1}
	
	
	==> gcp-auth [f7e963bb19262a09a16f93679318ed840adcaedcbd6e1425db0d952add42b6b6] <==
	2024/09/30 10:33:47 GCP Auth Webhook started!
	2024/09/30 10:34:50 Ready to marshal response ...
	2024/09/30 10:34:50 Ready to write response ...
	2024/09/30 10:34:50 Ready to marshal response ...
	2024/09/30 10:34:50 Ready to write response ...
	2024/09/30 10:34:50 Ready to marshal response ...
	2024/09/30 10:34:50 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:43:04 Ready to marshal response ...
	2024/09/30 10:43:04 Ready to write response ...
	2024/09/30 10:43:15 Ready to marshal response ...
	2024/09/30 10:43:15 Ready to write response ...
	2024/09/30 10:43:36 Ready to marshal response ...
	2024/09/30 10:43:36 Ready to write response ...
	
	
	==> kernel <==
	 10:44:06 up 1 day, 10:26,  0 users,  load average: 1.04, 0.91, 1.60
	Linux addons-718366 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] <==
	I0930 10:42:05.393655       1 main.go:299] handling current node
	I0930 10:42:15.387557       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:42:15.387674       1 main.go:299] handling current node
	I0930 10:42:25.386789       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:42:25.386822       1 main.go:299] handling current node
	I0930 10:42:35.393631       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:42:35.393667       1 main.go:299] handling current node
	I0930 10:42:45.387017       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:42:45.387065       1 main.go:299] handling current node
	I0930 10:42:55.386762       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:42:55.386801       1 main.go:299] handling current node
	I0930 10:43:05.387226       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:43:05.387257       1 main.go:299] handling current node
	I0930 10:43:15.386952       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:43:15.387100       1 main.go:299] handling current node
	I0930 10:43:25.389581       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:43:25.389618       1 main.go:299] handling current node
	I0930 10:43:35.389584       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:43:35.389715       1 main.go:299] handling current node
	I0930 10:43:45.388217       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:43:45.388320       1 main.go:299] handling current node
	I0930 10:43:55.386986       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:43:55.387022       1 main.go:299] handling current node
	I0930 10:44:05.386905       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:44:05.386971       1 main.go:299] handling current node
	
	
	==> kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] <==
	I0930 10:33:19.143564       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0930 10:34:15.883530       1 handler_proxy.go:99] no RequestInfo found in the context
	E0930 10:34:15.883698       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0930 10:34:15.885240       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.37.71:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.37.71:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.37.71:443: connect: connection refused" logger="UnhandledError"
	I0930 10:34:15.933537       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0930 10:34:15.944252       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0930 10:42:54.890805       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.140.135"}
	I0930 10:43:26.331792       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0930 10:43:44.099917       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0930 10:43:51.726819       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.727276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.753391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.753536       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.783338       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.783491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.824695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.824740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.863148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.863276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0930 10:43:52.825369       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0930 10:43:52.863903       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0930 10:43:52.910514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] <==
	I0930 10:43:45.309536       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0930 10:43:45.395674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-718366"
	I0930 10:43:51.869691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="8.41µs"
	E0930 10:43:52.827046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0930 10:43:52.866690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0930 10:43:52.913862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:43:54.040567       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:43:54.040613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:43:54.125734       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:43:54.125775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:43:54.375168       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:43:54.375208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:43:56.045860       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:43:56.045900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:43:56.234949       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:43:56.234993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:43:56.341612       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:43:56.341655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:44:01.271739       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:44:01.271783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:44:01.917501       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:44:01.918507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:44:02.531284       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:44:02.531325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:44:04.906359       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.302µs"
	
	
	==> kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] <==
	I0930 10:32:17.422602       1 server_linux.go:66] "Using iptables proxy"
	I0930 10:32:18.022042       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0930 10:32:18.046247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:32:18.422271       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0930 10:32:18.422424       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:32:18.435427       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:32:18.436261       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:32:18.436290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:32:18.439117       1 config.go:199] "Starting service config controller"
	I0930 10:32:18.439168       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:32:18.439259       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:32:18.439272       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:32:18.439784       1 config.go:328] "Starting node config controller"
	I0930 10:32:18.439802       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:32:18.539827       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 10:32:18.539944       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:32:18.539974       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] <==
	I0930 10:32:05.100973       1 serving.go:386] Generated self-signed cert in-memory
	W0930 10:32:06.502704       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 10:32:06.502825       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 10:32:06.502860       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 10:32:06.502922       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 10:32:06.525286       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 10:32:06.527188       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:32:06.529906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 10:32:06.530137       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 10:32:06.530163       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 10:32:06.530376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0930 10:32:06.535468       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 10:32:06.535769       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 10:32:07.631231       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 10:43:53 addons-718366 kubelet[1518]: I0930 10:43:53.111846    1518 scope.go:117] "RemoveContainer" containerID="06864a52ffe89c9849c0e6b33f21367575f22d7ff8a7da4feb31fbc800b06edd"
	Sep 30 10:43:53 addons-718366 kubelet[1518]: I0930 10:43:53.135753    1518 scope.go:117] "RemoveContainer" containerID="8d4034dff65c3a1d3c93bfd7a90896bbd1f58f478e738c8c92b647420ceedd15"
	Sep 30 10:43:53 addons-718366 kubelet[1518]: I0930 10:43:53.858548    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00072f66-80b8-45a4-b940-6db1fba0c14b" path="/var/lib/kubelet/pods/00072f66-80b8-45a4-b940-6db1fba0c14b/volumes"
	Sep 30 10:43:53 addons-718366 kubelet[1518]: I0930 10:43:53.858895    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61b1df1-f9b8-4ed6-b8bb-30c16e9e1a30" path="/var/lib/kubelet/pods/e61b1df1-f9b8-4ed6-b8bb-30c16e9e1a30/volumes"
	Sep 30 10:43:54 addons-718366 kubelet[1518]: E0930 10:43:54.858871    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df0189ce-cfa6-4fcb-9cb0-001e99817661"
	Sep 30 10:43:54 addons-718366 kubelet[1518]: E0930 10:43:54.858912    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="e066e902-ae7c-4eca-8494-b09d0ac67bce"
	Sep 30 10:43:58 addons-718366 kubelet[1518]: E0930 10:43:58.133734    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693038133460492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:507654,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:43:58 addons-718366 kubelet[1518]: E0930 10:43:58.133768    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693038133460492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:507654,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:44:00 addons-718366 kubelet[1518]: I0930 10:44:00.856575    1518 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-nxhd5" secret="" err="secret \"gcp-auth\" not found"
	Sep 30 10:44:04 addons-718366 kubelet[1518]: I0930 10:44:04.364387    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w62hs\" (UniqueName: \"kubernetes.io/projected/e066e902-ae7c-4eca-8494-b09d0ac67bce-kube-api-access-w62hs\") pod \"e066e902-ae7c-4eca-8494-b09d0ac67bce\" (UID: \"e066e902-ae7c-4eca-8494-b09d0ac67bce\") "
	Sep 30 10:44:04 addons-718366 kubelet[1518]: I0930 10:44:04.364436    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e066e902-ae7c-4eca-8494-b09d0ac67bce-gcp-creds\") pod \"e066e902-ae7c-4eca-8494-b09d0ac67bce\" (UID: \"e066e902-ae7c-4eca-8494-b09d0ac67bce\") "
	Sep 30 10:44:04 addons-718366 kubelet[1518]: I0930 10:44:04.364564    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e066e902-ae7c-4eca-8494-b09d0ac67bce-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e066e902-ae7c-4eca-8494-b09d0ac67bce" (UID: "e066e902-ae7c-4eca-8494-b09d0ac67bce"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 30 10:44:04 addons-718366 kubelet[1518]: I0930 10:44:04.367034    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e066e902-ae7c-4eca-8494-b09d0ac67bce-kube-api-access-w62hs" (OuterVolumeSpecName: "kube-api-access-w62hs") pod "e066e902-ae7c-4eca-8494-b09d0ac67bce" (UID: "e066e902-ae7c-4eca-8494-b09d0ac67bce"). InnerVolumeSpecName "kube-api-access-w62hs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:44:04 addons-718366 kubelet[1518]: I0930 10:44:04.465209    1518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w62hs\" (UniqueName: \"kubernetes.io/projected/e066e902-ae7c-4eca-8494-b09d0ac67bce-kube-api-access-w62hs\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:44:04 addons-718366 kubelet[1518]: I0930 10:44:04.465248    1518 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e066e902-ae7c-4eca-8494-b09d0ac67bce-gcp-creds\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:44:05 addons-718366 kubelet[1518]: I0930 10:44:05.192516    1518 scope.go:117] "RemoveContainer" containerID="27cb02aeefc4fac55d5bcd5375677ec070b3092fdf6ec5f9485e548b2b33c68a"
	Sep 30 10:44:05 addons-718366 kubelet[1518]: I0930 10:44:05.272070    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cccx7\" (UniqueName: \"kubernetes.io/projected/a2779ea5-90ce-41c6-800a-4fd0e62455e1-kube-api-access-cccx7\") pod \"a2779ea5-90ce-41c6-800a-4fd0e62455e1\" (UID: \"a2779ea5-90ce-41c6-800a-4fd0e62455e1\") "
	Sep 30 10:44:05 addons-718366 kubelet[1518]: I0930 10:44:05.272131    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hjw7\" (UniqueName: \"kubernetes.io/projected/78962db4-c230-431b-b141-405fd6389146-kube-api-access-5hjw7\") pod \"78962db4-c230-431b-b141-405fd6389146\" (UID: \"78962db4-c230-431b-b141-405fd6389146\") "
	Sep 30 10:44:05 addons-718366 kubelet[1518]: I0930 10:44:05.274327    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2779ea5-90ce-41c6-800a-4fd0e62455e1-kube-api-access-cccx7" (OuterVolumeSpecName: "kube-api-access-cccx7") pod "a2779ea5-90ce-41c6-800a-4fd0e62455e1" (UID: "a2779ea5-90ce-41c6-800a-4fd0e62455e1"). InnerVolumeSpecName "kube-api-access-cccx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:44:05 addons-718366 kubelet[1518]: I0930 10:44:05.275629    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78962db4-c230-431b-b141-405fd6389146-kube-api-access-5hjw7" (OuterVolumeSpecName: "kube-api-access-5hjw7") pod "78962db4-c230-431b-b141-405fd6389146" (UID: "78962db4-c230-431b-b141-405fd6389146"). InnerVolumeSpecName "kube-api-access-5hjw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:44:05 addons-718366 kubelet[1518]: I0930 10:44:05.372779    1518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cccx7\" (UniqueName: \"kubernetes.io/projected/a2779ea5-90ce-41c6-800a-4fd0e62455e1-kube-api-access-cccx7\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:44:05 addons-718366 kubelet[1518]: I0930 10:44:05.372822    1518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5hjw7\" (UniqueName: \"kubernetes.io/projected/78962db4-c230-431b-b141-405fd6389146-kube-api-access-5hjw7\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:44:05 addons-718366 kubelet[1518]: E0930 10:44:05.858525    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df0189ce-cfa6-4fcb-9cb0-001e99817661"
	Sep 30 10:44:05 addons-718366 kubelet[1518]: I0930 10:44:05.859574    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e066e902-ae7c-4eca-8494-b09d0ac67bce" path="/var/lib/kubelet/pods/e066e902-ae7c-4eca-8494-b09d0ac67bce/volumes"
	Sep 30 10:44:06 addons-718366 kubelet[1518]: I0930 10:44:06.200553    1518 scope.go:117] "RemoveContainer" containerID="68aaee132aa27317dc65602685ff7bc3e72debee409f831a9a004bb03c6bc64e"
	
	
	==> storage-provisioner [8564280a03e37716b0a9e9a9f7d87bbde241c67a46dcec2bb762772d073dec52] <==
	I0930 10:32:56.545004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:32:56.563543       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:32:56.563600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:32:56.576241       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:32:56.576984       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409!
	I0930 10:32:56.576481       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a56189b9-c62a-4b37-a064-2fefbb3251ee", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409 became leader
	I0930 10:32:56.677903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-718366 -n addons-718366
helpers_test.go:261: (dbg) Run:  kubectl --context addons-718366 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-m827b ingress-nginx-admission-patch-t2fbr
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-718366 describe pod busybox ingress-nginx-admission-create-m827b ingress-nginx-admission-patch-t2fbr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-718366 describe pod busybox ingress-nginx-admission-create-m827b ingress-nginx-admission-patch-t2fbr: exit status 1 (110.791154ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-718366/192.168.49.2
	Start Time:       Mon, 30 Sep 2024 10:34:50 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q78z7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q78z7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-718366
	  Normal   Pulling    7m50s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m49s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m49s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m38s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-m827b" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t2fbr" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-718366 describe pod busybox ingress-nginx-admission-create-m827b ingress-nginx-admission-patch-t2fbr: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.78s)

TestAddons/parallel/Ingress (151.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-718366 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-718366 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-718366 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5d8db117-a456-42be-95c3-132da5942e0c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5d8db117-a456-42be-95c3-132da5942e0c] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003314391s
I0930 10:44:28.228755  575428 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718366 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.507099168s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
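The `ssh: Process exited with status 28` line above is the remote command's exit status passed through the ssh session: 28 is curl's documented `CURLE_OPERATION_TIMEDOUT`, i.e. the request to the ingress never completed within the timeout rather than being refused outright. A small lookup table for the curl exit codes that commonly surface in tests like this one (codes taken from curl's documented error list):

```python
# Common curl exit statuses; the ssh wrapper propagates the remote
# command's exit status unchanged, so these apply to `minikube ssh "curl ..."`.
CURL_EXIT_CODES = {
    6: "CURLE_COULDNT_RESOLVE_HOST: DNS lookup failed",
    7: "CURLE_COULDNT_CONNECT: connection refused or host unreachable",
    28: "CURLE_OPERATION_TIMEDOUT: connect or transfer timed out",
    52: "CURLE_GOT_NOTHING: server closed the connection without replying",
}

def explain(status: int) -> str:
    return CURL_EXIT_CODES.get(status, f"unknown curl exit status {status}")

print(explain(28))
```

A timeout (28) rather than a refusal (7) suggests the controller pod was reachable but never answered, which matches the ingress controller still converging when the request was made.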
addons_test.go:284: (dbg) Run:  kubectl --context addons-718366 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 addons disable ingress-dns --alsologtostderr -v=1: (1.106846574s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 addons disable ingress --alsologtostderr -v=1: (7.764509253s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-718366
helpers_test.go:235: (dbg) docker inspect addons-718366:

-- stdout --
	[
	    {
	        "Id": "ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894",
	        "Created": "2024-09-30T10:31:43.905448896Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 576683,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-30T10:31:44.063796451Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/hosts",
	        "LogPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894-json.log",
	        "Name": "/addons-718366",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-718366:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-718366",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64-init/diff:/var/lib/docker/overlay2/89114fb86e05dfc705528dc965d39dcbdae2b3c32ee9939bb163740716767303/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-718366",
	                "Source": "/var/lib/docker/volumes/addons-718366/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-718366",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-718366",
	                "name.minikube.sigs.k8s.io": "addons-718366",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4000b12ceab08239f17e20c17eb46f041a0a6e684a414119cdec0d3429928e0b",
	            "SandboxKey": "/var/run/docker/netns/4000b12ceab0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38991"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-718366": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49bb2287327a5d5bf19993c7fe6d9348c5cc91efc29c195f3a50d6290c89924e",
	                    "EndpointID": "a3d75320f00be0ed0cbab5bc16e3263619548cfeae3e76a58471414489bf0190",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-718366",
	                        "ed341e1151f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
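The host-port mappings buried in `NetworkSettings.Ports` above (e.g. container port `8443/tcp` published on `127.0.0.1:38991`) can be extracted programmatically. A sketch that walks the same JSON shape as the inspect output, using a trimmed sample with the port values copied from the dump rather than queried live:

```python
import json

# Trimmed sample in the shape of `docker inspect <container>` output;
# values copied from the dump above, not from a live daemon.
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "38988"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "38991"}]
}}}]
""")

def host_port(inspect, container_port):
    """Return the first published host port for container_port, or None
    if the port is exposed but not bound."""
    bindings = inspect[0]["NetworkSettings"]["Ports"].get(container_port) or []
    return bindings[0]["HostPort"] if bindings else None

print(host_port(inspect_output, "8443/tcp"))  # → 38991
```

The same extraction can be done without a script via an inspect format template, e.g. `docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-718366`.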
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-718366 -n addons-718366
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 logs -n 25: (1.471777931s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-032798              | download-only-032798   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | -o=json --download-only              | download-only-575153   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | -p download-only-575153              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-575153              | download-only-575153   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-032798              | download-only-032798   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-575153              | download-only-575153   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | --download-only -p                   | download-docker-121895 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | download-docker-121895               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-121895            | download-docker-121895 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | --download-only -p                   | binary-mirror-919874   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | binary-mirror-919874                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44655               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-919874              | binary-mirror-919874   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| addons  | enable dashboard -p                  | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | addons-718366                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | addons-718366                        |                        |         |         |                     |                     |
	| start   | -p addons-718366 --wait=true         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:42 UTC | 30 Sep 24 10:42 UTC |
	|         | -p addons-718366                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-718366 addons disable         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-718366 addons                 | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-718366 addons                 | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-718366 ip                     | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC | 30 Sep 24 10:44 UTC |
	| addons  | addons-718366 addons disable         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC | 30 Sep 24 10:44 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC | 30 Sep 24 10:44 UTC |
	|         | addons-718366                        |                        |         |         |                     |                     |
	| ssh     | addons-718366 ssh curl -s            | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-718366 ip                     | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:46 UTC | 30 Sep 24 10:46 UTC |
	| addons  | addons-718366 addons disable         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:46 UTC | 30 Sep 24 10:46 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-718366 addons disable         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:46 UTC | 30 Sep 24 10:46 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:31:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:31:19.588253  576188 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:31:19.588435  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:19.588464  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:31:19.588483  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:19.588757  576188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 10:31:19.589326  576188 out.go:352] Setting JSON to false
	I0930 10:31:19.590293  576188 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":123226,"bootTime":1727569054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0930 10:31:19.590400  576188 start.go:139] virtualization:  
	I0930 10:31:19.592475  576188 out.go:177] * [addons-718366] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:31:19.593683  576188 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:31:19.593737  576188 notify.go:220] Checking for updates...
	I0930 10:31:19.596014  576188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:31:19.597688  576188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:31:19.598789  576188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	I0930 10:31:19.600169  576188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:31:19.601274  576188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:31:19.602931  576188 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:31:19.624953  576188 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:31:19.625081  576188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:19.686322  576188 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:31:19.676149404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:19.686454  576188 docker.go:318] overlay module found
	I0930 10:31:19.688493  576188 out.go:177] * Using the docker driver based on user configuration
	I0930 10:31:19.689696  576188 start.go:297] selected driver: docker
	I0930 10:31:19.689712  576188 start.go:901] validating driver "docker" against <nil>
	I0930 10:31:19.689727  576188 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:31:19.690364  576188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:19.737739  576188 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:31:19.72812774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:19.737977  576188 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:31:19.738212  576188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:31:19.739656  576188 out.go:177] * Using Docker driver with root privileges
	I0930 10:31:19.740990  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:31:19.741052  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:31:19.741072  576188 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 10:31:19.741162  576188 start.go:340] cluster config:
	{Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:31:19.743023  576188 out.go:177] * Starting "addons-718366" primary control-plane node in "addons-718366" cluster
	I0930 10:31:19.743990  576188 cache.go:121] Beginning downloading kic base image for docker with crio
	I0930 10:31:19.745206  576188 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:31:19.746898  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:19.746949  576188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0930 10:31:19.746962  576188 cache.go:56] Caching tarball of preloaded images
	I0930 10:31:19.747074  576188 preload.go:172] Found /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0930 10:31:19.747089  576188 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 10:31:19.747446  576188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json ...
	I0930 10:31:19.747510  576188 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:31:19.747474  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json: {Name:mk2af656d2be7cf8581e9e41a4766db590e98cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:19.763017  576188 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:31:19.763137  576188 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:31:19.763167  576188 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0930 10:31:19.763175  576188 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0930 10:31:19.763182  576188 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0930 10:31:19.763188  576188 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0930 10:31:36.606388  576188 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0930 10:31:36.606431  576188 cache.go:194] Successfully downloaded all kic artifacts
	I0930 10:31:36.606473  576188 start.go:360] acquireMachinesLock for addons-718366: {Name:mkcc9f52048bcb539eb2c19ba8edac315f37b684 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:31:36.606610  576188 start.go:364] duration metric: took 113.425µs to acquireMachinesLock for "addons-718366"
	I0930 10:31:36.606640  576188 start.go:93] Provisioning new machine with config: &{Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 10:31:36.606722  576188 start.go:125] createHost starting for "" (driver="docker")
	I0930 10:31:36.609505  576188 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0930 10:31:36.609800  576188 start.go:159] libmachine.API.Create for "addons-718366" (driver="docker")
	I0930 10:31:36.609842  576188 client.go:168] LocalClient.Create starting
	I0930 10:31:36.609960  576188 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem
	I0930 10:31:36.990982  576188 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem
	I0930 10:31:37.632250  576188 cli_runner.go:164] Run: docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0930 10:31:37.647997  576188 cli_runner.go:211] docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0930 10:31:37.648087  576188 network_create.go:284] running [docker network inspect addons-718366] to gather additional debugging logs...
	I0930 10:31:37.648108  576188 cli_runner.go:164] Run: docker network inspect addons-718366
	W0930 10:31:37.666472  576188 cli_runner.go:211] docker network inspect addons-718366 returned with exit code 1
	I0930 10:31:37.666507  576188 network_create.go:287] error running [docker network inspect addons-718366]: docker network inspect addons-718366: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-718366 not found
	I0930 10:31:37.666521  576188 network_create.go:289] output of [docker network inspect addons-718366]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-718366 not found
	
	** /stderr **
	I0930 10:31:37.666652  576188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:31:37.682855  576188 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b0f20}
	I0930 10:31:37.682901  576188 network_create.go:124] attempt to create docker network addons-718366 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0930 10:31:37.682963  576188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-718366 addons-718366
	I0930 10:31:37.753006  576188 network_create.go:108] docker network addons-718366 192.168.49.0/24 created
	I0930 10:31:37.753040  576188 kic.go:121] calculated static IP "192.168.49.2" for the "addons-718366" container
	I0930 10:31:37.753117  576188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0930 10:31:37.768087  576188 cli_runner.go:164] Run: docker volume create addons-718366 --label name.minikube.sigs.k8s.io=addons-718366 --label created_by.minikube.sigs.k8s.io=true
	I0930 10:31:37.784157  576188 oci.go:103] Successfully created a docker volume addons-718366
	I0930 10:31:37.784245  576188 cli_runner.go:164] Run: docker run --rm --name addons-718366-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --entrypoint /usr/bin/test -v addons-718366:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0930 10:31:39.859396  576188 cli_runner.go:217] Completed: docker run --rm --name addons-718366-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --entrypoint /usr/bin/test -v addons-718366:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.075110378s)
	I0930 10:31:39.859424  576188 oci.go:107] Successfully prepared a docker volume addons-718366
	I0930 10:31:39.859448  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:39.859467  576188 kic.go:194] Starting extracting preloaded images to volume ...
	I0930 10:31:39.859530  576188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718366:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0930 10:31:43.835757  576188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718366:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.97617046s)
	I0930 10:31:43.835789  576188 kic.go:203] duration metric: took 3.976319306s to extract preloaded images to volume ...
	W0930 10:31:43.835943  576188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0930 10:31:43.836061  576188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0930 10:31:43.891196  576188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-718366 --name addons-718366 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-718366 --network addons-718366 --ip 192.168.49.2 --volume addons-718366:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0930 10:31:44.248245  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Running}}
	I0930 10:31:44.274600  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:44.306529  576188 cli_runner.go:164] Run: docker exec addons-718366 stat /var/lib/dpkg/alternatives/iptables
	I0930 10:31:44.359444  576188 oci.go:144] the created container "addons-718366" has a running status.
	I0930 10:31:44.359471  576188 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa...
	I0930 10:31:44.997180  576188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0930 10:31:45.033020  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:45.054795  576188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0930 10:31:45.054823  576188 kic_runner.go:114] Args: [docker exec --privileged addons-718366 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0930 10:31:45.150433  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:45.178099  576188 machine.go:93] provisionDockerMachine start ...
	I0930 10:31:45.178219  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.203008  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.203294  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.203305  576188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 10:31:45.341698  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718366
	
	I0930 10:31:45.341727  576188 ubuntu.go:169] provisioning hostname "addons-718366"
	I0930 10:31:45.341795  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.364079  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.364321  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.364339  576188 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-718366 && echo "addons-718366" | sudo tee /etc/hostname
	I0930 10:31:45.513605  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718366
	
	I0930 10:31:45.513697  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.531270  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.531519  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.531542  576188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-718366' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-718366/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-718366' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 10:31:45.657393  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 10:31:45.657421  576188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-570035/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-570035/.minikube}
	I0930 10:31:45.657449  576188 ubuntu.go:177] setting up certificates
	I0930 10:31:45.657461  576188 provision.go:84] configureAuth start
	I0930 10:31:45.657532  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:45.674066  576188 provision.go:143] copyHostCerts
	I0930 10:31:45.674149  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/ca.pem (1078 bytes)
	I0930 10:31:45.674271  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/cert.pem (1123 bytes)
	I0930 10:31:45.674342  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/key.pem (1679 bytes)
	I0930 10:31:45.674396  576188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem org=jenkins.addons-718366 san=[127.0.0.1 192.168.49.2 addons-718366 localhost minikube]
	I0930 10:31:45.981328  576188 provision.go:177] copyRemoteCerts
	I0930 10:31:45.981423  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 10:31:45.981472  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.997951  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.090693  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 10:31:46.116251  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 10:31:46.141025  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 10:31:46.166818  576188 provision.go:87] duration metric: took 509.328593ms to configureAuth
	I0930 10:31:46.166888  576188 ubuntu.go:193] setting minikube options for container-runtime
	I0930 10:31:46.167109  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:31:46.167220  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.183793  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:46.184047  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:46.184069  576188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 10:31:46.414611  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 10:31:46.414638  576188 machine.go:96] duration metric: took 1.236519349s to provisionDockerMachine
	I0930 10:31:46.414654  576188 client.go:171] duration metric: took 9.804797803s to LocalClient.Create
	I0930 10:31:46.414708  576188 start.go:167] duration metric: took 9.804909414s to libmachine.API.Create "addons-718366"
	I0930 10:31:46.414724  576188 start.go:293] postStartSetup for "addons-718366" (driver="docker")
	I0930 10:31:46.414735  576188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 10:31:46.414836  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 10:31:46.414922  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.432825  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.526839  576188 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 10:31:46.529986  576188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 10:31:46.530020  576188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 10:31:46.530031  576188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 10:31:46.530038  576188 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0930 10:31:46.530053  576188 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-570035/.minikube/addons for local assets ...
	I0930 10:31:46.530129  576188 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-570035/.minikube/files for local assets ...
	I0930 10:31:46.530155  576188 start.go:296] duration metric: took 115.424998ms for postStartSetup
	I0930 10:31:46.530481  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:46.546445  576188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json ...
	I0930 10:31:46.546743  576188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:31:46.546793  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.563100  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.658380  576188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0930 10:31:46.662859  576188 start.go:128] duration metric: took 10.056121452s to createHost
	I0930 10:31:46.662883  576188 start.go:83] releasing machines lock for "addons-718366", held for 10.056259303s
	I0930 10:31:46.662953  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:46.679358  576188 ssh_runner.go:195] Run: cat /version.json
	I0930 10:31:46.679415  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.679741  576188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 10:31:46.679803  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.704694  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.707977  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.917060  576188 ssh_runner.go:195] Run: systemctl --version
	I0930 10:31:46.921195  576188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 10:31:47.061112  576188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 10:31:47.065232  576188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:31:47.086297  576188 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0930 10:31:47.086388  576188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:31:47.121211  576188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0930 10:31:47.121240  576188 start.go:495] detecting cgroup driver to use...
	I0930 10:31:47.121275  576188 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:31:47.121327  576188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 10:31:47.138863  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 10:31:47.150816  576188 docker.go:217] disabling cri-docker service (if available) ...
	I0930 10:31:47.150879  576188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 10:31:47.165652  576188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 10:31:47.179926  576188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 10:31:47.273399  576188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 10:31:47.363581  576188 docker.go:233] disabling docker service ...
	I0930 10:31:47.363669  576188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 10:31:47.383649  576188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 10:31:47.396300  576188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 10:31:47.479534  576188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 10:31:47.578817  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 10:31:47.590693  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:31:47.606912  576188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 10:31:47.606982  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.616770  576188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 10:31:47.616838  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.626842  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.636932  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.646765  576188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 10:31:47.655795  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.665503  576188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.681353  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.691540  576188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 10:31:47.700478  576188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 10:31:47.709442  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:31:47.791594  576188 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 10:31:47.910242  576188 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 10:31:47.910380  576188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 10:31:47.913887  576188 start.go:563] Will wait 60s for crictl version
	I0930 10:31:47.913948  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:31:47.917201  576188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 10:31:47.956213  576188 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0930 10:31:47.956327  576188 ssh_runner.go:195] Run: crio --version
	I0930 10:31:47.995739  576188 ssh_runner.go:195] Run: crio --version
	I0930 10:31:48.038600  576188 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0930 10:31:48.040972  576188 cli_runner.go:164] Run: docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:31:48.059448  576188 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0930 10:31:48.063378  576188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:31:48.074967  576188 kubeadm.go:883] updating cluster {Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 10:31:48.075101  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:48.075164  576188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:31:48.152821  576188 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 10:31:48.152846  576188 crio.go:433] Images already preloaded, skipping extraction
	I0930 10:31:48.152903  576188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:31:48.188287  576188 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 10:31:48.188312  576188 cache_images.go:84] Images are preloaded, skipping loading
	I0930 10:31:48.188323  576188 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0930 10:31:48.188415  576188 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-718366 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 10:31:48.188496  576188 ssh_runner.go:195] Run: crio config
	I0930 10:31:48.238352  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:31:48.238376  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:31:48.238386  576188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 10:31:48.238408  576188 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-718366 NodeName:addons-718366 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 10:31:48.238553  576188 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-718366"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 10:31:48.238630  576188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 10:31:48.247791  576188 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 10:31:48.247902  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 10:31:48.256589  576188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0930 10:31:48.274946  576188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 10:31:48.293776  576188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0930 10:31:48.312418  576188 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0930 10:31:48.315789  576188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:31:48.326439  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:31:48.407610  576188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:31:48.421862  576188 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366 for IP: 192.168.49.2
	I0930 10:31:48.421936  576188 certs.go:194] generating shared ca certs ...
	I0930 10:31:48.421965  576188 certs.go:226] acquiring lock for ca certs: {Name:mk1a6e0acac4c352dd045fb15e8f16e43e290be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.422139  576188 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key
	I0930 10:31:48.852559  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt ...
	I0930 10:31:48.852592  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt: {Name:mkf151645d175ccb0b3534f7f3a47f78c7b74bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.852823  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key ...
	I0930 10:31:48.852839  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key: {Name:mk253c50c9e044c6b24426ba126fc768ae2c086d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.852936  576188 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key
	I0930 10:31:49.127433  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt ...
	I0930 10:31:49.127472  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt: {Name:mk3c5c40e5e854bce5292f6c8b72b378b70a89ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.127671  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key ...
	I0930 10:31:49.127693  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key: {Name:mkccb69636b16c12bfb67aee8a9ccc8fbc4adc20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.127784  576188 certs.go:256] generating profile certs ...
	I0930 10:31:49.127846  576188 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key
	I0930 10:31:49.127867  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt with IP's: []
	I0930 10:31:49.435254  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt ...
	I0930 10:31:49.435286  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: {Name:mkb5471f9020f84972ffa54ded95d7795d2a1016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.435477  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key ...
	I0930 10:31:49.435489  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key: {Name:mk3319c7a4b7aa7eacc7a275bdff66d1921999a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.435574  576188 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da
	I0930 10:31:49.435592  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0930 10:31:50.182674  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da ...
	I0930 10:31:50.182710  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da: {Name:mk6507e673c5274a73199d398bdbaf9b2d7b6554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.182907  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da ...
	I0930 10:31:50.182921  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da: {Name:mk737ffdf84242931763a97a2893d5f88d102eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.183007  576188 certs.go:381] copying /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da -> /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt
	I0930 10:31:50.183084  576188 certs.go:385] copying /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da -> /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key
	I0930 10:31:50.183135  576188 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key
	I0930 10:31:50.183156  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt with IP's: []
	I0930 10:31:50.657677  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt ...
	I0930 10:31:50.657708  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt: {Name:mkddac17456589328bd0297cfc529913e40d6096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.657893  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key ...
	I0930 10:31:50.657907  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key: {Name:mk1da3d7241ee96e850a287589cbd33941beaf05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.659767  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 10:31:50.659810  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem (1078 bytes)
	I0930 10:31:50.659833  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem (1123 bytes)
	I0930 10:31:50.659862  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem (1679 bytes)
	I0930 10:31:50.660447  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 10:31:50.684494  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 10:31:50.708442  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 10:31:50.732440  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0930 10:31:50.756657  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 10:31:50.780179  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 10:31:50.804081  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 10:31:50.832833  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 10:31:50.870081  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 10:31:50.894487  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 10:31:50.911847  576188 ssh_runner.go:195] Run: openssl version
	I0930 10:31:50.917167  576188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 10:31:50.926449  576188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.929974  576188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.930037  576188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.936865  576188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 10:31:50.946146  576188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 10:31:50.949263  576188 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 10:31:50.949326  576188 kubeadm.go:392] StartCluster: {Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:31:50.949411  576188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 10:31:50.949469  576188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 10:31:50.986405  576188 cri.go:89] found id: ""
	I0930 10:31:50.986521  576188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 10:31:50.995471  576188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 10:31:51.005070  576188 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0930 10:31:51.005164  576188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 10:31:51.014498  576188 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 10:31:51.014517  576188 kubeadm.go:157] found existing configuration files:
	
	I0930 10:31:51.014593  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 10:31:51.023579  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 10:31:51.023670  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 10:31:51.032109  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 10:31:51.040792  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 10:31:51.040883  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 10:31:51.049272  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 10:31:51.058271  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 10:31:51.058357  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 10:31:51.067199  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 10:31:51.075621  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 10:31:51.075693  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 10:31:51.083850  576188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0930 10:31:51.127566  576188 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 10:31:51.127636  576188 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 10:31:51.147314  576188 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0930 10:31:51.147389  576188 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0930 10:31:51.147428  576188 kubeadm.go:310] OS: Linux
	I0930 10:31:51.147478  576188 kubeadm.go:310] CGROUPS_CPU: enabled
	I0930 10:31:51.147529  576188 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0930 10:31:51.147580  576188 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0930 10:31:51.147630  576188 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0930 10:31:51.147689  576188 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0930 10:31:51.147743  576188 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0930 10:31:51.147792  576188 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0930 10:31:51.147843  576188 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0930 10:31:51.147891  576188 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0930 10:31:51.211072  576188 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 10:31:51.211220  576188 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 10:31:51.211322  576188 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 10:31:51.217978  576188 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 10:31:51.222074  576188 out.go:235]   - Generating certificates and keys ...
	I0930 10:31:51.222200  576188 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 10:31:51.222290  576188 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 10:31:51.507541  576188 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 10:31:52.100429  576188 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 10:31:52.343512  576188 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 10:31:53.350821  576188 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 10:31:54.127332  576188 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 10:31:54.127730  576188 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-718366 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:31:55.090224  576188 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 10:31:55.090597  576188 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-718366 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:31:55.557333  576188 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 10:31:56.433561  576188 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 10:31:57.360076  576188 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 10:31:57.360372  576188 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 10:31:57.616865  576188 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 10:31:58.166068  576188 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 10:31:58.642711  576188 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 10:31:59.408755  576188 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 10:31:59.928063  576188 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 10:31:59.928676  576188 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 10:31:59.931546  576188 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 10:31:59.934534  576188 out.go:235]   - Booting up control plane ...
	I0930 10:31:59.934632  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 10:31:59.934707  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 10:31:59.934773  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 10:31:59.943378  576188 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 10:31:59.949241  576188 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 10:31:59.949518  576188 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 10:32:00.105875  576188 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 10:32:00.106001  576188 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 10:32:01.107740  576188 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001980346s
	I0930 10:32:01.107838  576188 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 10:32:07.109888  576188 kubeadm.go:310] [api-check] The API server is healthy after 6.002182723s
	I0930 10:32:07.131339  576188 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 10:32:07.151401  576188 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 10:32:07.177130  576188 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 10:32:07.177349  576188 kubeadm.go:310] [mark-control-plane] Marking the node addons-718366 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 10:32:07.188510  576188 kubeadm.go:310] [bootstrap-token] Using token: 8aonc1.ekajo8hgoq6vth44
	I0930 10:32:07.193078  576188 out.go:235]   - Configuring RBAC rules ...
	I0930 10:32:07.193212  576188 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 10:32:07.195793  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 10:32:07.203953  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 10:32:07.207903  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 10:32:07.211613  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 10:32:07.218369  576188 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 10:32:07.519705  576188 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 10:32:07.953415  576188 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 10:32:08.516178  576188 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 10:32:08.517416  576188 kubeadm.go:310] 
	I0930 10:32:08.517508  576188 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 10:32:08.517531  576188 kubeadm.go:310] 
	I0930 10:32:08.517630  576188 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 10:32:08.517641  576188 kubeadm.go:310] 
	I0930 10:32:08.517681  576188 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 10:32:08.517745  576188 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 10:32:08.517806  576188 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 10:32:08.517818  576188 kubeadm.go:310] 
	I0930 10:32:08.517880  576188 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 10:32:08.517888  576188 kubeadm.go:310] 
	I0930 10:32:08.517935  576188 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 10:32:08.517940  576188 kubeadm.go:310] 
	I0930 10:32:08.517992  576188 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 10:32:08.518066  576188 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 10:32:08.518134  576188 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 10:32:08.518138  576188 kubeadm.go:310] 
	I0930 10:32:08.518221  576188 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 10:32:08.518298  576188 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 10:32:08.518302  576188 kubeadm.go:310] 
	I0930 10:32:08.518385  576188 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8aonc1.ekajo8hgoq6vth44 \
	I0930 10:32:08.518487  576188 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:34f1ba6de874bd896834dc114ac775d877f5b795b01506ad8bb22dc9b74f70da \
	I0930 10:32:08.518508  576188 kubeadm.go:310] 	--control-plane 
	I0930 10:32:08.518513  576188 kubeadm.go:310] 
	I0930 10:32:08.518603  576188 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 10:32:08.518608  576188 kubeadm.go:310] 
	I0930 10:32:08.518690  576188 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8aonc1.ekajo8hgoq6vth44 \
	I0930 10:32:08.518791  576188 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:34f1ba6de874bd896834dc114ac775d877f5b795b01506ad8bb22dc9b74f70da 
	I0930 10:32:08.522706  576188 kubeadm.go:310] W0930 10:31:51.124221    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:32:08.523011  576188 kubeadm.go:310] W0930 10:31:51.125105    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:32:08.523230  576188 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0930 10:32:08.523336  576188 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 10:32:08.523356  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:32:08.523365  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:32:08.526350  576188 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 10:32:08.528840  576188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 10:32:08.532638  576188 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 10:32:08.532658  576188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 10:32:08.550943  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 10:32:08.822890  576188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 10:32:08.823054  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:08.823069  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-718366 minikube.k8s.io/updated_at=2024_09_30T10_32_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-718366 minikube.k8s.io/primary=true
	I0930 10:32:08.983346  576188 ops.go:34] apiserver oom_adj: -16
	I0930 10:32:08.998983  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:09.500016  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:09.999359  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:10.499482  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:10.999362  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:11.499443  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:11.999113  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:12.500484  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:12.616697  576188 kubeadm.go:1113] duration metric: took 3.793709432s to wait for elevateKubeSystemPrivileges
	I0930 10:32:12.616732  576188 kubeadm.go:394] duration metric: took 21.667424713s to StartCluster
	I0930 10:32:12.616750  576188 settings.go:142] acquiring lock: {Name:mk11436cfb74a22d5df272d0ed716a2f4f11abe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:32:12.616873  576188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:32:12.617251  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/kubeconfig: {Name:mk2b4dce89b9a4c7357cab4707a99982ddc5b94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:32:12.617445  576188 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 10:32:12.617597  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 10:32:12.617836  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:32:12.617874  576188 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 10:32:12.617960  576188 addons.go:69] Setting yakd=true in profile "addons-718366"
	I0930 10:32:12.617979  576188 addons.go:234] Setting addon yakd=true in "addons-718366"
	I0930 10:32:12.618003  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.618496  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.618987  576188 addons.go:69] Setting inspektor-gadget=true in profile "addons-718366"
	I0930 10:32:12.619028  576188 addons.go:234] Setting addon inspektor-gadget=true in "addons-718366"
	I0930 10:32:12.619066  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.619563  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.619718  576188 addons.go:69] Setting metrics-server=true in profile "addons-718366"
	I0930 10:32:12.619732  576188 addons.go:234] Setting addon metrics-server=true in "addons-718366"
	I0930 10:32:12.619755  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.620173  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.620821  576188 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-718366"
	I0930 10:32:12.620870  576188 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-718366"
	I0930 10:32:12.620910  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.621401  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.627312  576188 addons.go:69] Setting registry=true in profile "addons-718366"
	I0930 10:32:12.627345  576188 addons.go:234] Setting addon registry=true in "addons-718366"
	I0930 10:32:12.627389  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.627879  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629127  576188 addons.go:69] Setting cloud-spanner=true in profile "addons-718366"
	I0930 10:32:12.629593  576188 addons.go:234] Setting addon cloud-spanner=true in "addons-718366"
	I0930 10:32:12.629630  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.630378  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629311  576188 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-718366"
	I0930 10:32:12.634602  576188 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-718366"
	I0930 10:32:12.634666  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.629324  576188 addons.go:69] Setting default-storageclass=true in profile "addons-718366"
	I0930 10:32:12.637049  576188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-718366"
	I0930 10:32:12.637348  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.642682  576188 addons.go:69] Setting storage-provisioner=true in profile "addons-718366"
	I0930 10:32:12.642716  576188 addons.go:234] Setting addon storage-provisioner=true in "addons-718366"
	I0930 10:32:12.642757  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.643213  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629329  576188 addons.go:69] Setting gcp-auth=true in profile "addons-718366"
	I0930 10:32:12.652125  576188 mustload.go:65] Loading cluster: addons-718366
	I0930 10:32:12.652324  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:32:12.652576  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.656063  576188 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-718366"
	I0930 10:32:12.656091  576188 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-718366"
	I0930 10:32:12.656420  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.685014  576188 addons.go:69] Setting volcano=true in profile "addons-718366"
	I0930 10:32:12.685050  576188 addons.go:234] Setting addon volcano=true in "addons-718366"
	I0930 10:32:12.685092  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.685608  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629332  576188 addons.go:69] Setting ingress=true in profile "addons-718366"
	I0930 10:32:12.687633  576188 addons.go:234] Setting addon ingress=true in "addons-718366"
	I0930 10:32:12.687681  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.688210  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.705646  576188 addons.go:69] Setting volumesnapshots=true in profile "addons-718366"
	I0930 10:32:12.705685  576188 addons.go:234] Setting addon volumesnapshots=true in "addons-718366"
	I0930 10:32:12.705724  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.706207  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629336  576188 addons.go:69] Setting ingress-dns=true in profile "addons-718366"
	I0930 10:32:12.708613  576188 addons.go:234] Setting addon ingress-dns=true in "addons-718366"
	I0930 10:32:12.708663  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.709150  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629418  576188 out.go:177] * Verifying Kubernetes components...
	I0930 10:32:12.729494  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.825496  576188 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 10:32:12.832477  576188 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 10:32:12.835208  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:32:12.835233  576188 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 10:32:12.835325  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.835432  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:32:12.853660  576188 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 10:32:12.855707  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.857751  576188 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 10:32:12.857864  576188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 10:32:12.859599  576188 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:32:12.872767  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 10:32:12.872887  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.865884  576188 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:32:12.875361  576188 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 10:32:12.875445  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.866737  576188 addons.go:234] Setting addon default-storageclass=true in "addons-718366"
	I0930 10:32:12.882918  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.883376  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.887937  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:32:12.887958  576188 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 10:32:12.888030  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.894765  576188 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:32:12.894794  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 10:32:12.894866  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.872690  576188 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 10:32:12.908423  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 10:32:12.908785  576188 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 10:32:12.908833  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 10:32:12.908950  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.940598  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 10:32:12.941002  576188 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 10:32:12.946206  576188 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 10:32:12.948814  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:12.949045  576188 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:32:12.949077  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 10:32:12.949171  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.958006  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:12.959188  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 10:32:12.961743  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0930 10:32:12.962757  576188 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0930 10:32:12.973838  576188 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:32:12.973872  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 10:32:12.973943  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.979512  576188 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 10:32:12.979700  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 10:32:12.985709  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 10:32:12.985933  576188 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:32:12.985946  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 10:32:12.986012  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.996257  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 10:32:12.996526  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.001310  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 10:32:13.001479  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:32:13.001508  576188 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 10:32:13.001634  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.009342  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 10:32:13.017322  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 10:32:13.020721  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 10:32:13.021813  576188 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-718366"
	I0930 10:32:13.021852  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:13.022269  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:13.032608  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:32:13.032637  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 10:32:13.032715  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.058753  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.086640  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.090634  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.123015  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.154530  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.177875  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.178807  576188 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 10:32:13.178823  576188 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 10:32:13.178880  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.185183  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.204407  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.206891  576188 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 10:32:13.209370  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.213841  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.224068  576188 out.go:177]   - Using image docker.io/busybox:stable
	I0930 10:32:13.227725  576188 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:32:13.227749  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 10:32:13.227816  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.235510  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.260318  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	W0930 10:32:13.273338  576188 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0930 10:32:13.273406  576188 retry.go:31] will retry after 227.69102ms: ssh: handshake failed: EOF
	I0930 10:32:13.394925  576188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:32:13.486745  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:32:13.486818  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 10:32:13.623628  576188 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:32:13.623711  576188 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 10:32:13.630043  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:32:13.635130  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:32:13.638091  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:32:13.638162  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 10:32:13.659361  576188 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:32:13.659438  576188 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 10:32:13.671231  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:32:13.673254  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:32:13.673314  576188 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 10:32:13.699306  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:32:13.699326  576188 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 10:32:13.702344  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:32:13.749760  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:32:13.749837  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 10:32:13.762014  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 10:32:13.776095  576188 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 10:32:13.776167  576188 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 10:32:13.783348  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:32:13.795504  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:32:13.795584  576188 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 10:32:13.809799  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:32:13.809876  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 10:32:13.867266  576188 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:32:13.867337  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 10:32:13.895970  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:32:13.896050  576188 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 10:32:13.927958  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:32:13.928037  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 10:32:13.932218  576188 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:32:13.932292  576188 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 10:32:13.950651  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:32:13.969239  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:32:13.969315  576188 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 10:32:13.972998  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:32:13.973069  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 10:32:14.064724  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:32:14.068228  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:32:14.068306  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 10:32:14.084109  576188 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:32:14.084189  576188 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 10:32:14.101672  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:32:14.118680  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:32:14.118751  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 10:32:14.128305  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:32:14.128380  576188 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 10:32:14.228099  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:32:14.228175  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 10:32:14.260067  576188 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:32:14.260147  576188 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 10:32:14.267263  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:32:14.286038  576188 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:14.286113  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 10:32:14.406085  576188 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:32:14.406166  576188 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 10:32:14.409527  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:32:14.409623  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 10:32:14.443415  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:14.478742  576188 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:32:14.478821  576188 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 10:32:14.482790  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:32:14.482880  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 10:32:14.522876  576188 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:32:14.522950  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 10:32:14.538265  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:32:14.538348  576188 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 10:32:14.599317  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:32:14.621923  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:32:14.621995  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 10:32:14.718338  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:32:14.718419  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 10:32:14.771911  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:32:14.771992  576188 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 10:32:14.830453  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:32:16.302802  576188 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.293410506s)
	I0930 10:32:16.302886  576188 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0930 10:32:16.303052  576188 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.908050917s)
	I0930 10:32:16.303221  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.673105181s)
	I0930 10:32:16.304869  576188 node_ready.go:35] waiting up to 6m0s for node "addons-718366" to be "Ready" ...
	I0930 10:32:16.969956  576188 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-718366" context rescaled to 1 replicas
	I0930 10:32:17.813534  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.178328626s)
	I0930 10:32:17.813663  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.142359566s)
	I0930 10:32:18.331726  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:18.989036  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.2269377s)
	I0930 10:32:18.989135  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.205718923s)
	I0930 10:32:18.989300  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.038581281s)
	I0930 10:32:18.989533  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.924735717s)
	I0930 10:32:18.990072  576188 addons.go:475] Verifying addon registry=true in "addons-718366"
	I0930 10:32:18.989162  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.2865793s)
	I0930 10:32:18.989730  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.887983209s)
	I0930 10:32:18.990430  576188 addons.go:475] Verifying addon metrics-server=true in "addons-718366"
	I0930 10:32:18.989761  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.722429624s)
	I0930 10:32:18.990693  576188 addons.go:475] Verifying addon ingress=true in "addons-718366"
	I0930 10:32:18.989832  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.546341657s)
	W0930 10:32:18.991429  576188 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:32:18.991452  576188 retry.go:31] will retry after 214.891484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:32:18.989886  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.390493939s)
	I0930 10:32:18.993976  576188 out.go:177] * Verifying ingress addon...
	I0930 10:32:18.993993  576188 out.go:177] * Verifying registry addon...
	I0930 10:32:18.994136  576188 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-718366 service yakd-dashboard -n yakd-dashboard
	
	I0930 10:32:18.998130  576188 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 10:32:19.000026  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 10:32:19.012749  576188 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:32:19.012827  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0930 10:32:19.013748  576188 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0930 10:32:19.015873  576188 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 10:32:19.015899  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:19.206505  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:19.222406  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.391854023s)
	I0930 10:32:19.222443  576188 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-718366"
	I0930 10:32:19.225269  576188 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 10:32:19.228851  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 10:32:19.265510  576188 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:32:19.265536  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:19.502396  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:19.510520  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:19.733138  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.002773  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:20.004965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:20.233838  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.503847  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:20.505878  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:20.735188  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.808517  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:21.005465  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:21.006508  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:21.232962  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:21.508544  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:21.510168  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:21.746490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:21.919471  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 10:32:21.919583  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:21.945306  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:22.005204  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:22.020654  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:22.107096  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 10:32:22.156422  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.949861964s)
	I0930 10:32:22.161917  576188 addons.go:234] Setting addon gcp-auth=true in "addons-718366"
	I0930 10:32:22.161972  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:22.162436  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:22.180503  576188 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 10:32:22.180562  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:22.199581  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:22.234471  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:22.293532  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:22.295855  576188 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 10:32:22.298481  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:32:22.298507  576188 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 10:32:22.327120  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:32:22.327146  576188 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 10:32:22.354965  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:32:22.354989  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 10:32:22.374415  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:32:22.505237  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:22.505593  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:22.733404  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:22.810784  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:22.982766  576188 addons.go:475] Verifying addon gcp-auth=true in "addons-718366"
	I0930 10:32:22.985946  576188 out.go:177] * Verifying gcp-auth addon...
	I0930 10:32:22.989503  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 10:32:22.997921  576188 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:32:22.997948  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:23.007118  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:23.013282  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:23.232430  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:23.492864  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:23.502671  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:23.504311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:23.732922  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:23.993049  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:24.002595  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:24.005381  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:24.232995  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:24.492978  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:24.502914  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:24.503966  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:24.733190  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:24.993358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:25.002805  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:25.003600  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:25.232476  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:25.308811  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:25.492564  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:25.502308  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:25.504363  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:25.732965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:25.993474  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:26.003592  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:26.005468  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:26.232578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:26.493164  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:26.502818  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:26.504372  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:26.732670  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:26.993385  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:27.004214  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:27.004360  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:27.232999  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:27.493904  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:27.502518  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:27.504500  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:27.732700  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:27.809256  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:27.993469  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:28.002259  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:28.005142  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:28.232398  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:28.493035  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:28.502278  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:28.503849  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:28.732758  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:28.992992  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:29.003509  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:29.004188  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:29.232281  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:29.492609  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:29.501741  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:29.504027  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:29.732607  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:29.993719  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:30.005781  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:30.006305  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:30.232478  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:30.308805  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:30.493327  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:30.502458  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:30.504010  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:30.732161  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:30.993225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:31.002921  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:31.004619  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:31.232186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:31.492616  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:31.501951  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:31.503335  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:31.732881  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:31.993602  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:32.003681  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:32.004106  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:32.232590  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:32.308898  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:32.492382  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:32.502524  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:32.503242  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:32.732493  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:32.993359  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:33.003345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:33.004523  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:33.232210  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:33.492895  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:33.502809  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:33.503380  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:33.732345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:33.992694  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:34.002487  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:34.005419  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:34.232668  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:34.493362  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:34.502120  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:34.503290  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:34.732832  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:34.808872  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:34.993165  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:35.002532  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:35.003792  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:35.232243  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:35.492644  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:35.502151  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:35.504388  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:35.732397  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:35.993350  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:36.004449  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:36.006027  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:36.233129  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:36.493897  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:36.503054  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:36.503156  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:36.732619  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:36.993186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:37.003617  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:37.004328  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:37.232382  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:37.309099  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:37.492995  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:37.502362  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:37.503981  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:37.732628  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:37.992500  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:38.006378  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:38.009415  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:38.232948  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:38.493574  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:38.501907  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:38.503340  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:38.732877  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:38.993074  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:39.002160  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:39.004134  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:39.232913  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:39.492334  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:39.502072  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:39.504609  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:39.733100  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:39.808384  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:39.992997  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:40.002119  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:40.012472  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:40.232629  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:40.492673  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:40.501888  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:40.503434  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:40.732929  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:40.992943  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:41.003060  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:41.004287  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:41.232552  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:41.493144  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:41.501724  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:41.504150  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:41.732666  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:41.808700  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:41.992905  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:42.002375  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:42.004751  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:42.232856  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:42.494375  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:42.502604  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:42.503446  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:42.732867  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:42.993326  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:43.002100  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:43.004307  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:43.232852  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:43.493140  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:43.501743  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:43.503151  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:43.733043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:43.993474  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:44.003911  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:44.004199  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:44.232444  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:44.308664  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:44.492846  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:44.502736  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:44.503109  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:44.732682  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:44.992688  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:45.002473  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:45.006372  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:45.233808  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:45.493054  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:45.502649  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:45.504224  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:45.732634  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:45.992992  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:46.003067  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:46.005020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:46.232318  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:46.308743  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:46.493311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:46.501833  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:46.504311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:46.732337  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:46.993446  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:47.002979  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:47.004213  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:47.231826  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:47.493043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:47.502555  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:47.504579  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:47.733091  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:47.992702  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:48.006318  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:48.006591  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:48.232843  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:48.309156  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:48.492630  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:48.502793  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:48.505041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:48.732633  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:48.993020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:49.002803  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:49.005073  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:49.232599  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:49.493358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:49.502132  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:49.504685  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:49.732732  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:49.993101  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:50.008747  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:50.011007  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:50.232033  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:50.492811  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:50.502024  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:50.503194  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:50.732565  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:50.808123  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:50.992880  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:51.002470  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:51.004489  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:51.232566  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:51.493283  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:51.503096  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:51.504579  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:51.732498  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:51.997038  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:52.003743  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:52.004664  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:52.232961  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:52.493233  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:52.502560  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:52.504146  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:52.732196  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:52.809117  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:52.993352  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:53.002467  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:53.005258  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:53.232118  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:53.492883  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:53.503298  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:53.503937  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:53.732561  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:53.992888  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:54.003179  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:54.003621  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:54.232000  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:54.493201  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:54.502407  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:54.504047  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:54.732754  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:54.809165  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:54.993439  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:55.003745  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:55.006573  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:55.232921  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:55.532690  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:55.536564  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:55.537614  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:55.747717  576188 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:32:55.747798  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:55.813584  576188 node_ready.go:49] node "addons-718366" has status "Ready":"True"
	I0930 10:32:55.813696  576188 node_ready.go:38] duration metric: took 39.508639259s for node "addons-718366" to be "Ready" ...
	I0930 10:32:55.813729  576188 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:32:55.842207  576188 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:56.024341  576188 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:32:56.024415  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:56.026608  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:56.027649  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:56.238249  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:56.510908  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:56.599871  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:56.601369  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:56.734813  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:56.993968  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:57.004113  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:57.004475  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:57.234269  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:57.349188  576188 pod_ready.go:93] pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.349213  576188 pod_ready.go:82] duration metric: took 1.506927684s for pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.349264  576188 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.354868  576188 pod_ready.go:93] pod "etcd-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.354894  576188 pod_ready.go:82] duration metric: took 5.614429ms for pod "etcd-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.354911  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.360427  576188 pod_ready.go:93] pod "kube-apiserver-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.360453  576188 pod_ready.go:82] duration metric: took 5.533545ms for pod "kube-apiserver-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.360465  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.366443  576188 pod_ready.go:93] pod "kube-controller-manager-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.366468  576188 pod_ready.go:82] duration metric: took 5.995876ms for pod "kube-controller-manager-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.366481  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6d7ts" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.409203  576188 pod_ready.go:93] pod "kube-proxy-6d7ts" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.409232  576188 pod_ready.go:82] duration metric: took 42.742719ms for pod "kube-proxy-6d7ts" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.409245  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.494502  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:57.504034  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:57.504588  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:57.741490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:57.809369  576188 pod_ready.go:93] pod "kube-scheduler-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.809395  576188 pod_ready.go:82] duration metric: took 400.142122ms for pod "kube-scheduler-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.809406  576188 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.992791  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:58.002813  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:58.005194  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:58.235034  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:58.493263  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:58.505193  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:58.507236  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:58.735275  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:58.993601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:59.003135  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:59.005872  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:59.234232  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:59.493712  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:59.505146  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:59.506583  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:59.734233  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:59.817196  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:32:59.996524  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:00.018042  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:00.019456  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:00.235319  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:00.493875  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:00.513018  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:00.515874  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:00.735209  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:00.993692  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:01.009352  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:01.011139  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:01.234558  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:01.493345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:01.502755  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:01.504885  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:01.734041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:01.823332  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:01.993286  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:02.003595  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:02.005208  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:02.234246  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:02.494833  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:02.506503  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:02.507965  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:02.733979  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:02.994512  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:03.006008  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:03.008882  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:03.235987  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:03.502069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:03.504611  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:03.508145  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:03.734477  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:03.993075  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:04.002465  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:04.005969  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:04.237150  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:04.318563  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:04.493450  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:04.503535  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:04.505295  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:04.735410  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:04.993251  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:05.004507  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:05.005793  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:05.233147  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:05.493785  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:05.503110  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:05.504756  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:05.734929  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:05.993818  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:06.005361  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:06.008120  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:06.234165  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:06.494029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:06.506345  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:06.507733  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:06.736131  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:06.820180  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:06.997221  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:07.003917  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:07.012186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:07.235277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:07.494419  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:07.503987  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:07.506651  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:07.735614  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:07.993601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:08.007216  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:08.008949  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:08.235108  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:08.492758  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:08.506875  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:08.509276  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:08.734821  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:08.996343  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:09.003494  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:09.018021  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:09.233920  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:09.322744  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:09.495622  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:09.503188  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:09.505370  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:09.733302  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:09.993442  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:10.007158  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:10.014910  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:10.236566  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:10.493122  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:10.506277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:10.508170  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:10.734819  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.003392  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:11.017958  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:11.024396  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:11.241113  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.493717  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:11.503395  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:11.505398  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:11.734258  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.818701  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:11.993638  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:12.004028  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:12.005119  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:12.234546  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:12.493816  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:12.502382  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:12.504357  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:12.735120  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:12.993827  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:13.003086  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:13.005511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:13.240764  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:13.493012  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:13.502733  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:13.504695  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:13.739103  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:13.992794  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:14.002410  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:14.004962  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:14.234182  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:14.315747  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:14.493894  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:14.502951  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:14.504325  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:14.735374  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:14.995201  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:15.008392  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:15.009511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:15.239287  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:15.497798  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:15.505845  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:15.506265  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:15.733914  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:15.994121  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:16.002064  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:16.005323  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:16.235840  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:16.317348  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:16.493717  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:16.502559  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:16.504743  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:16.733456  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:16.993232  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:17.004117  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:17.005715  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:17.233977  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:17.493225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:17.508853  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:17.509324  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:17.733379  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:17.993128  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:18.002969  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:18.004753  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:18.235053  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:18.318055  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:18.494182  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:18.514063  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:18.515256  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:18.741787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:18.993437  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:19.006106  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:19.006941  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:19.238835  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:19.493578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:19.503346  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:19.507520  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:19.735461  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:19.993675  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:20.007386  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:20.009120  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:20.234329  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:20.494059  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:20.503676  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:20.508870  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:20.734675  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:20.819054  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:20.994644  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:21.005532  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:21.006881  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:21.233747  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:21.493683  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:21.502510  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:21.505435  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:21.733595  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:21.993151  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:22.004128  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:22.007124  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:22.234355  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:22.494138  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:22.522806  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:22.523017  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:22.733192  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:22.993544  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:23.003301  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:23.005614  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:23.234009  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:23.316009  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:23.493223  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:23.502465  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:23.504091  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:23.734075  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:23.993191  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:24.005564  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:24.006266  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:24.237087  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:24.494192  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:24.509932  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:24.511585  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:24.736086  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:24.993584  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:25.002534  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:25.004467  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:25.238048  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:25.316968  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:25.493170  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:25.502257  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:25.504345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:25.735840  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:25.993750  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:26.014041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:26.018512  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:26.234506  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:26.499206  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:26.522015  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:26.531645  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:26.734077  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:26.995142  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:27.002623  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:27.005131  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:27.234277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:27.509630  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:27.517834  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:27.519610  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:27.734102  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:27.815917  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:27.993225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:28.004799  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:28.007787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:28.233431  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:28.495964  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:28.505908  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:28.507029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:28.743601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:28.994222  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:29.005072  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:29.005919  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:29.234475  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:29.493121  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:29.503087  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:29.505224  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:29.733867  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:29.818825  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:29.993832  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:30.003223  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:30.009270  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:30.234345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:30.493573  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:30.503172  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:30.506658  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:30.734108  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:30.997885  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:31.003703  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:31.006228  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:31.234690  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:31.492946  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:31.504905  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:31.505338  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:31.734023  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:31.993444  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:32.005887  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:32.016752  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:32.234205  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:32.316627  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:32.493299  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:32.504102  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:32.512754  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:32.735003  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:32.994944  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:33.006628  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:33.007729  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:33.234441  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:33.493806  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:33.505141  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:33.507304  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:33.738773  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:33.993624  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:34.013205  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:34.017042  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:34.233853  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:34.316867  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:34.492641  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:34.502886  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:34.503705  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:34.734286  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:34.993856  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:35.002176  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:35.004584  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:35.233492  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:35.493057  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:35.502018  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:35.503973  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:35.734314  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:35.993679  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:36.002264  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:36.008072  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:36.233857  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:36.492965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:36.502535  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:36.504461  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:36.735017  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:36.816831  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:36.996016  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:37.008288  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:37.015405  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:37.234294  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:37.497062  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:37.504363  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:37.504553  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:37.735672  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:37.992884  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:38.005378  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:38.007796  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:38.237325  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:38.493907  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:38.505124  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:38.505765  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:38.734820  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:38.818257  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:38.994462  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:39.004598  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:39.014134  576188 kapi.go:107] duration metric: took 1m20.014106342s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 10:33:39.235130  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:39.494071  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:39.503484  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:39.734794  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:39.999698  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:40.010425  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:40.242604  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:40.499596  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:40.503174  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:40.735274  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:40.993423  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:41.003329  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:41.236791  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:41.316472  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:41.494610  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:41.503610  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:41.734043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:41.994292  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:42.002568  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:42.235021  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:42.493143  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:42.502820  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:42.733736  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:42.993069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:43.003100  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:43.234277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:43.317480  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:43.493236  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:43.502436  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:43.734921  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:43.992811  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:44.003086  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:44.233865  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:44.493110  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:44.502615  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:44.733541  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:44.993633  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:45.003852  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:45.234843  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:45.493514  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:45.502782  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:45.733458  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:45.817273  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:45.993706  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:46.016026  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:46.233913  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:46.498463  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:46.502757  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:46.734490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:46.993029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:47.004462  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:47.235637  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:47.503521  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:47.504652  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:47.741358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:47.993918  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:48.006378  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:48.234693  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:48.315817  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:48.493248  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:48.502422  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:48.740592  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:48.993401  576188 kapi.go:107] duration metric: took 1m26.003896883s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 10:33:48.996461  576188 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-718366 cluster.
	I0930 10:33:48.999075  576188 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 10:33:49.002456  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:49.005169  576188 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 10:33:49.235396  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:49.503984  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:49.734511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:50.004782  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:50.235070  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:50.323313  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:50.503830  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:50.734604  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:51.003831  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:51.234289  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:51.503943  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:51.733769  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.002609  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:52.234340  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.507200  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:52.734763  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.818591  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:53.004428  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:53.235787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:53.502862  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:53.734437  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:54.007069  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:54.235077  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:54.503292  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:54.735359  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:55.002193  576188 kapi.go:107] duration metric: took 1m36.004059929s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 10:33:55.234033  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:55.317516  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:55.734069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:56.234143  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:56.734127  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.233654  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.738983  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.816482  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:58.234471  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:58.734677  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.238020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.734710  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.817182  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:00.236578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:00.734525  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.234627  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.734546  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.825323  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:02.233540  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:02.738772  576188 kapi.go:107] duration metric: took 1m43.50991885s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 10:34:02.744119  576188 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0930 10:34:02.746979  576188 addons.go:510] duration metric: took 1m50.129091289s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0930 10:34:04.316300  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:06.815648  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:09.315052  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:11.315816  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:13.316065  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:15.316190  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:16.315831  576188 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"True"
	I0930 10:34:16.315861  576188 pod_ready.go:82] duration metric: took 1m18.506446968s for pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.315874  576188 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.321502  576188 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace has status "Ready":"True"
	I0930 10:34:16.321532  576188 pod_ready.go:82] duration metric: took 5.649022ms for pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.321583  576188 pod_ready.go:39] duration metric: took 1m20.507828006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:34:16.321605  576188 api_server.go:52] waiting for apiserver process to appear ...
	I0930 10:34:16.321638  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:16.321706  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:16.386809  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:16.386886  576188 cri.go:89] found id: ""
	I0930 10:34:16.386900  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:16.386984  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.391025  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:16.391106  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:16.435062  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:16.435085  576188 cri.go:89] found id: ""
	I0930 10:34:16.435094  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:16.435153  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.438701  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:16.438773  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:16.478714  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:16.478737  576188 cri.go:89] found id: ""
	I0930 10:34:16.478746  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:16.478802  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.482397  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:16.482471  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:16.537909  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:16.537932  576188 cri.go:89] found id: ""
	I0930 10:34:16.537940  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:16.538010  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.541631  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:16.541707  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:16.584294  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:16.584316  576188 cri.go:89] found id: ""
	I0930 10:34:16.584324  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:16.584387  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.588121  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:16.588197  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:16.627920  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:16.627943  576188 cri.go:89] found id: ""
	I0930 10:34:16.627951  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:16.628010  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.631831  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:16.631910  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:16.670917  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:16.670987  576188 cri.go:89] found id: ""
	I0930 10:34:16.671002  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:16.671067  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.674818  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:16.674843  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:16.691258  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:16.691286  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:16.781066  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:16.781106  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:16.824438  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:16.824473  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:16.883060  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:16.883091  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:16.989887  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:16.989925  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:17.064721  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.064968  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.065190  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.065432  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.065664  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.065898  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.067781  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.067995  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:17.104140  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:17.104180  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:17.291559  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:17.291591  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:17.344411  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:17.344446  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:17.394328  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:17.394358  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:17.437492  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:17.437522  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:17.506642  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:17.506679  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:17.557358  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:17.557386  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:17.557577  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:17.557600  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.557623  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.557643  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.557652  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.557663  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:17.557670  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:17.557678  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:27.559396  576188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:34:27.573481  576188 api_server.go:72] duration metric: took 2m14.955998532s to wait for apiserver process to appear ...
	I0930 10:34:27.573512  576188 api_server.go:88] waiting for apiserver healthz status ...
	I0930 10:34:27.573570  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:27.573627  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:27.612157  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:27.612193  576188 cri.go:89] found id: ""
	I0930 10:34:27.612201  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:27.612290  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.615922  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:27.615995  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:27.657373  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:27.657395  576188 cri.go:89] found id: ""
	I0930 10:34:27.657413  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:27.657473  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.661114  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:27.661186  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:27.699276  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:27.699300  576188 cri.go:89] found id: ""
	I0930 10:34:27.699309  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:27.699385  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.703275  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:27.703356  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:27.743333  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:27.743353  576188 cri.go:89] found id: ""
	I0930 10:34:27.743361  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:27.743432  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.746997  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:27.747079  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:27.787583  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:27.787605  576188 cri.go:89] found id: ""
	I0930 10:34:27.787613  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:27.787691  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.791098  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:27.791173  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:27.850541  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:27.850563  576188 cri.go:89] found id: ""
	I0930 10:34:27.850575  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:27.850631  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.854249  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:27.854319  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:27.893234  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:27.893260  576188 cri.go:89] found id: ""
	I0930 10:34:27.893268  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:27.893322  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.897133  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:27.897160  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:27.951284  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:27.951319  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:28.003152  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:28.003184  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:28.043478  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:28.043557  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:28.115108  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:28.115147  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:28.159435  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:28.159461  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:28.258636  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:28.258677  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:28.302989  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:28.303015  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:28.370971  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.371245  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.371445  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.371681  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.371871  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.372100  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.373981  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.374197  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:28.410471  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:28.410499  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:28.427272  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:28.427345  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:28.564680  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:28.564708  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:28.622261  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:28.622295  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:28.714780  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:28.714813  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:28.714867  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:28.714881  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.714889  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.714916  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.714924  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.714934  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:28.714940  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:28.714947  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:38.716957  576188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0930 10:34:38.725719  576188 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0930 10:34:38.726755  576188 api_server.go:141] control plane version: v1.31.1
	I0930 10:34:38.726784  576188 api_server.go:131] duration metric: took 11.153263628s to wait for apiserver health ...
	I0930 10:34:38.726809  576188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 10:34:38.726837  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:38.726904  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:38.773675  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:38.773695  576188 cri.go:89] found id: ""
	I0930 10:34:38.773703  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:38.773769  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.777305  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:38.777389  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:38.819225  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:38.819245  576188 cri.go:89] found id: ""
	I0930 10:34:38.819254  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:38.819313  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.823902  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:38.823980  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:38.865257  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:38.865278  576188 cri.go:89] found id: ""
	I0930 10:34:38.865301  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:38.865358  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.869041  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:38.869123  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:38.909299  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:38.909323  576188 cri.go:89] found id: ""
	I0930 10:34:38.909331  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:38.909388  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.912958  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:38.913039  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:38.951466  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:38.951489  576188 cri.go:89] found id: ""
	I0930 10:34:38.951497  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:38.951555  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.955148  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:38.955250  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:38.999433  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:38.999506  576188 cri.go:89] found id: ""
	I0930 10:34:38.999523  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:38.999588  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:39.003640  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:39.003758  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:39.042975  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:39.043045  576188 cri.go:89] found id: ""
	I0930 10:34:39.043060  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:39.043118  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:39.046722  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:39.046747  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:39.115864  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:39.115902  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:39.186356  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.186605  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.186799  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.187028  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.187213  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.187443  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.189229  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.189452  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:39.226877  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:39.226918  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:39.244214  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:39.244244  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:39.303635  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:39.303672  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:39.346611  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:39.346643  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:39.385388  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:39.385425  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:39.449017  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:39.449056  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:39.546447  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:39.546489  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:39.596349  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:39.596379  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:39.729086  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:39.729117  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:39.806140  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:39.806172  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:39.854236  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:39.854262  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:39.854352  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:39.854377  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.854389  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.854401  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.854407  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.854414  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:39.854426  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:39.854432  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:49.868953  576188 system_pods.go:59] 18 kube-system pods found
	I0930 10:34:49.868992  576188 system_pods.go:61] "coredns-7c65d6cfc9-dtmzl" [7a2c2f43-a853-49df-b58f-e6f6141a2737] Running
	I0930 10:34:49.869001  576188 system_pods.go:61] "csi-hostpath-attacher-0" [66af66d7-7f8b-4650-a4b6-9b5162ab76a1] Running
	I0930 10:34:49.869006  576188 system_pods.go:61] "csi-hostpath-resizer-0" [2d3f5d2f-f058-4a6b-b1c3-f25d3d549257] Running
	I0930 10:34:49.869012  576188 system_pods.go:61] "csi-hostpathplugin-mzdc5" [1f8386ff-e365-4d77-85c4-4380cc952f88] Running
	I0930 10:34:49.869048  576188 system_pods.go:61] "etcd-addons-718366" [41eb7870-f127-4cfa-8bb3-b32081bec033] Running
	I0930 10:34:49.869053  576188 system_pods.go:61] "kindnet-cx2x5" [cc2b53ef-4eba-4f69-a5e3-d3b1b8aee067] Running
	I0930 10:34:49.869062  576188 system_pods.go:61] "kube-apiserver-addons-718366" [d591a564-dc70-47d3-9e30-ac55eb92f702] Running
	I0930 10:34:49.869066  576188 system_pods.go:61] "kube-controller-manager-addons-718366" [566dbcee-1187-41f2-aaf4-b462be8fedc8] Running
	I0930 10:34:49.869079  576188 system_pods.go:61] "kube-ingress-dns-minikube" [201cdd5a-777d-406a-a3c3-ae55dfa26b03] Running
	I0930 10:34:49.869083  576188 system_pods.go:61] "kube-proxy-6d7ts" [1c00ed0e-dc57-4a81-b778-b92a64f0e0c1] Running
	I0930 10:34:49.869087  576188 system_pods.go:61] "kube-scheduler-addons-718366" [2159256d-1219-4d6d-9ec4-10a229c89118] Running
	I0930 10:34:49.869092  576188 system_pods.go:61] "metrics-server-84c5f94fbc-jqf86" [37c7c588-691f-43b1-bc7e-d9d29b8c740e] Running
	I0930 10:34:49.869130  576188 system_pods.go:61] "nvidia-device-plugin-daemonset-4vhfz" [409875b6-caeb-49b0-a6a3-4adab5c26abf] Running
	I0930 10:34:49.869143  576188 system_pods.go:61] "registry-66c9cd494c-zx9j9" [a2779ea5-90ce-41c6-800a-4fd0e62455e1] Running
	I0930 10:34:49.869147  576188 system_pods.go:61] "registry-proxy-nxhd5" [78962db4-c230-431b-b141-405fd6389146] Running
	I0930 10:34:49.869151  576188 system_pods.go:61] "snapshot-controller-56fcc65765-fnzp5" [00072f66-80b8-45a4-b940-6db1fba0c14b] Running
	I0930 10:34:49.869156  576188 system_pods.go:61] "snapshot-controller-56fcc65765-rtd66" [e61b1df1-f9b8-4ed6-b8bb-30c16e9e1a30] Running
	I0930 10:34:49.869160  576188 system_pods.go:61] "storage-provisioner" [fcd0fbac-220e-4dd5-a1a6-3ecae26b1962] Running
	I0930 10:34:49.869169  576188 system_pods.go:74] duration metric: took 11.14235034s to wait for pod list to return data ...
	I0930 10:34:49.869180  576188 default_sa.go:34] waiting for default service account to be created ...
	I0930 10:34:49.872043  576188 default_sa.go:45] found service account: "default"
	I0930 10:34:49.872072  576188 default_sa.go:55] duration metric: took 2.885942ms for default service account to be created ...
	I0930 10:34:49.872082  576188 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 10:34:49.882723  576188 system_pods.go:86] 18 kube-system pods found
	I0930 10:34:49.882762  576188 system_pods.go:89] "coredns-7c65d6cfc9-dtmzl" [7a2c2f43-a853-49df-b58f-e6f6141a2737] Running
	I0930 10:34:49.882770  576188 system_pods.go:89] "csi-hostpath-attacher-0" [66af66d7-7f8b-4650-a4b6-9b5162ab76a1] Running
	I0930 10:34:49.882794  576188 system_pods.go:89] "csi-hostpath-resizer-0" [2d3f5d2f-f058-4a6b-b1c3-f25d3d549257] Running
	I0930 10:34:49.882803  576188 system_pods.go:89] "csi-hostpathplugin-mzdc5" [1f8386ff-e365-4d77-85c4-4380cc952f88] Running
	I0930 10:34:49.882815  576188 system_pods.go:89] "etcd-addons-718366" [41eb7870-f127-4cfa-8bb3-b32081bec033] Running
	I0930 10:34:49.882820  576188 system_pods.go:89] "kindnet-cx2x5" [cc2b53ef-4eba-4f69-a5e3-d3b1b8aee067] Running
	I0930 10:34:49.882825  576188 system_pods.go:89] "kube-apiserver-addons-718366" [d591a564-dc70-47d3-9e30-ac55eb92f702] Running
	I0930 10:34:49.882835  576188 system_pods.go:89] "kube-controller-manager-addons-718366" [566dbcee-1187-41f2-aaf4-b462be8fedc8] Running
	I0930 10:34:49.882840  576188 system_pods.go:89] "kube-ingress-dns-minikube" [201cdd5a-777d-406a-a3c3-ae55dfa26b03] Running
	I0930 10:34:49.882845  576188 system_pods.go:89] "kube-proxy-6d7ts" [1c00ed0e-dc57-4a81-b778-b92a64f0e0c1] Running
	I0930 10:34:49.882855  576188 system_pods.go:89] "kube-scheduler-addons-718366" [2159256d-1219-4d6d-9ec4-10a229c89118] Running
	I0930 10:34:49.882859  576188 system_pods.go:89] "metrics-server-84c5f94fbc-jqf86" [37c7c588-691f-43b1-bc7e-d9d29b8c740e] Running
	I0930 10:34:49.882882  576188 system_pods.go:89] "nvidia-device-plugin-daemonset-4vhfz" [409875b6-caeb-49b0-a6a3-4adab5c26abf] Running
	I0930 10:34:49.882887  576188 system_pods.go:89] "registry-66c9cd494c-zx9j9" [a2779ea5-90ce-41c6-800a-4fd0e62455e1] Running
	I0930 10:34:49.882891  576188 system_pods.go:89] "registry-proxy-nxhd5" [78962db4-c230-431b-b141-405fd6389146] Running
	I0930 10:34:49.882913  576188 system_pods.go:89] "snapshot-controller-56fcc65765-fnzp5" [00072f66-80b8-45a4-b940-6db1fba0c14b] Running
	I0930 10:34:49.882918  576188 system_pods.go:89] "snapshot-controller-56fcc65765-rtd66" [e61b1df1-f9b8-4ed6-b8bb-30c16e9e1a30] Running
	I0930 10:34:49.882927  576188 system_pods.go:89] "storage-provisioner" [fcd0fbac-220e-4dd5-a1a6-3ecae26b1962] Running
	I0930 10:34:49.882936  576188 system_pods.go:126] duration metric: took 10.846857ms to wait for k8s-apps to be running ...
	I0930 10:34:49.882947  576188 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 10:34:49.883021  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:34:49.895565  576188 system_svc.go:56] duration metric: took 12.60696ms WaitForService to wait for kubelet
	I0930 10:34:49.895595  576188 kubeadm.go:582] duration metric: took 2m37.278117729s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:34:49.895621  576188 node_conditions.go:102] verifying NodePressure condition ...
	I0930 10:34:49.898702  576188 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0930 10:34:49.898735  576188 node_conditions.go:123] node cpu capacity is 2
	I0930 10:34:49.898746  576188 node_conditions.go:105] duration metric: took 3.119274ms to run NodePressure ...
	I0930 10:34:49.898785  576188 start.go:241] waiting for startup goroutines ...
	I0930 10:34:49.898799  576188 start.go:246] waiting for cluster config update ...
	I0930 10:34:49.898824  576188 start.go:255] writing updated cluster config ...
	I0930 10:34:49.899193  576188 ssh_runner.go:195] Run: rm -f paused
	I0930 10:34:50.233812  576188 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 10:34:50.238556  576188 out.go:177] * Done! kubectl is now configured to use "addons-718366" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.235843105Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b1229db6-ab68-42cf-9a6a-14d422e4bb03 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.236564097Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-tqpln/hello-world-app" id=239bc4a3-5c04-4b9b-ac8f-38315149931d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.236658896Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.260591680Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/01eaee6a673344c29271eae3e31ec6658472a4f3779a7ac4b6e7d6d9d78f5bcf/merged/etc/passwd: no such file or directory"
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.260642813Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/01eaee6a673344c29271eae3e31ec6658472a4f3779a7ac4b6e7d6d9d78f5bcf/merged/etc/group: no such file or directory"
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.305044799Z" level=info msg="Created container 34a349e0235902b628a760a8d5683e183e7940ae0df7f37a4edea1918d43f3f9: default/hello-world-app-55bf9c44b4-tqpln/hello-world-app" id=239bc4a3-5c04-4b9b-ac8f-38315149931d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.305956883Z" level=info msg="Starting container: 34a349e0235902b628a760a8d5683e183e7940ae0df7f37a4edea1918d43f3f9" id=d62d9318-5e3b-4399-b363-612b7950e748 name=/runtime.v1.RuntimeService/StartContainer
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.314631809Z" level=info msg="Started container" PID=13388 containerID=34a349e0235902b628a760a8d5683e183e7940ae0df7f37a4edea1918d43f3f9 description=default/hello-world-app-55bf9c44b4-tqpln/hello-world-app id=d62d9318-5e3b-4399-b363-612b7950e748 name=/runtime.v1.RuntimeService/StartContainer sandboxID=146f54b359ff0b7e76679984d30c8c3638b9d5a22a70015dcf41890638f56fe6
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.536700101Z" level=info msg="Removing container: eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b" id=f5b379cf-966e-4d64-a759-de4fd3a5b328 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 30 10:46:40 addons-718366 crio[968]: time="2024-09-30 10:46:40.555777460Z" level=info msg="Removed container eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=f5b379cf-966e-4d64-a759-de4fd3a5b328 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 30 10:46:42 addons-718366 crio[968]: time="2024-09-30 10:46:42.237148527Z" level=info msg="Stopping container: 5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295 (timeout: 2s)" id=33c9f6dd-0fbd-4b1d-a178-dbf648a25804 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.243670566Z" level=warning msg="Stopping container 5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=33c9f6dd-0fbd-4b1d-a178-dbf648a25804 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:46:44 addons-718366 conmon[5350]: conmon 5c8d71a37f78818db56d <ninfo>: container 5361 exited with status 137
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.382765088Z" level=info msg="Stopped container 5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295: ingress-nginx/ingress-nginx-controller-bc57996ff-7w899/controller" id=33c9f6dd-0fbd-4b1d-a178-dbf648a25804 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.383327479Z" level=info msg="Stopping pod sandbox: b191e28d983d732f835d1ff431e01669024f4ceee1a6210825549baecb5837b9" id=0dbc53d6-b37d-41c7-a8b7-6b66bec18d0a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.386990956Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-TKC2RL6Q5WBCAISP - [0:0]\n:KUBE-HP-GDKGD3R5TBCYOFQK - [0:0]\n-X KUBE-HP-TKC2RL6Q5WBCAISP\n-X KUBE-HP-GDKGD3R5TBCYOFQK\nCOMMIT\n"
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.389601412Z" level=info msg="Closing host port tcp:80"
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.389657280Z" level=info msg="Closing host port tcp:443"
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.391142266Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.391172025Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.391359171Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-7w899 Namespace:ingress-nginx ID:b191e28d983d732f835d1ff431e01669024f4ceee1a6210825549baecb5837b9 UID:f8973837-432c-4179-90fb-061f9cdc391b NetNS:/var/run/netns/6909c899-b991-41f2-b50f-c30ab2cc84fe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.391498893Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-7w899 from CNI network \"kindnet\" (type=ptp)"
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.419198872Z" level=info msg="Stopped pod sandbox: b191e28d983d732f835d1ff431e01669024f4ceee1a6210825549baecb5837b9" id=0dbc53d6-b37d-41c7-a8b7-6b66bec18d0a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.550255187Z" level=info msg="Removing container: 5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295" id=9dbc9d1d-6f1e-481e-a54f-7f0ff6957336 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 30 10:46:44 addons-718366 crio[968]: time="2024-09-30 10:46:44.568960732Z" level=info msg="Removed container 5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295: ingress-nginx/ingress-nginx-controller-bc57996ff-7w899/controller" id=9dbc9d1d-6f1e-481e-a54f-7f0ff6957336 name=/runtime.v1.RuntimeService/RemoveContainer
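The conmon line above ("container 5361 exited with status 137") is the expected follow-on to the 2-second stop timeout: crio escalates from the stop signal to SIGKILL, and container exit statuses above 128 encode the fatal signal as 128 + signum. A minimal sketch of that decoding (helper name is illustrative, not part of any tool here):

```python
import signal

def signal_from_exit_status(status: int) -> str:
    # Exit statuses above 128 conventionally mean "killed by signal status-128".
    if status <= 128:
        raise ValueError("not a signal exit")
    return signal.Signals(status - 128).name

# 137 = 128 + 9, i.e. the ingress controller was SIGKILLed after the
# "timeout reached after 2 seconds" warning above.
print(signal_from_exit_status(137))  # SIGKILL
```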
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	34a349e023590       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app            0                   146f54b359ff0       hello-world-app-55bf9c44b4-tqpln
	890353217af2d       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                              2 minutes ago       Running             nginx                      0                   3cdfaa97914c9       nginx
	f7e963bb19262       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 13 minutes ago      Running             gcp-auth                   0                   c248cf7b4c141       gcp-auth-89d5ffd79-4zcrm
	2babe6190e81a       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              13 minutes ago      Running             yakd                       0                   d17238ba7148c       yakd-dashboard-67d98fc6b-bjwsv
	f6af04ac4123e       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     13 minutes ago      Running             nvidia-device-plugin-ctr   0                   7540dac0ccdc7       nvidia-device-plugin-daemonset-4vhfz
	2db2a4c8a8db1       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago      Exited              patch                      1                   b7854eba67a5d       ingress-nginx-admission-patch-t2fbr
	796bea4cf337f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago      Exited              create                     0                   c25003ea82c06       ingress-nginx-admission-create-m827b
	d5d130ee164aa       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             13 minutes ago      Running             local-path-provisioner     0                   3b673820921ac       local-path-provisioner-86d989889c-stxvp
	7f69273237e01       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d               13 minutes ago      Running             cloud-spanner-emulator     0                   f2b1f198facf7       cloud-spanner-emulator-5b584cc74-jgnx2
	7f25834811580       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        13 minutes ago      Running             metrics-server             0                   3898176a51cc9       metrics-server-84c5f94fbc-jqf86
	8564280a03e37       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             13 minutes ago      Running             storage-provisioner        0                   8ea3828da6af2       storage-provisioner
	8970b526b14d3       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             13 minutes ago      Running             coredns                    0                   fc8f26b163074       coredns-7c65d6cfc9-dtmzl
	97d43354c9c18       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             14 minutes ago      Running             kindnet-cni                0                   b504155dced4a       kindnet-cx2x5
	d94629297e53a       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             14 minutes ago      Running             kube-proxy                 0                   42061a7582848       kube-proxy-6d7ts
	f46dcd2ffd212       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             14 minutes ago      Running             kube-scheduler             0                   4328be32dbdda       kube-scheduler-addons-718366
	8427a90f7890f       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             14 minutes ago      Running             kube-controller-manager    0                   61fcf6c446cf3       kube-controller-manager-addons-718366
	162d3240be19c       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             14 minutes ago      Running             kube-apiserver             0                   c729a09320dc3       kube-apiserver-addons-718366
	c0e6564b9b165       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             14 minutes ago      Running             etcd                       0                   e280627a20055       etcd-addons-718366
	
	
	==> coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] <==
	[INFO] 10.244.0.16:51134 - 3862 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009631s
	[INFO] 10.244.0.16:51134 - 28713 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002595396s
	[INFO] 10.244.0.16:51134 - 38278 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002469697s
	[INFO] 10.244.0.16:51134 - 60275 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000123813s
	[INFO] 10.244.0.16:51134 - 20504 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000921133s
	[INFO] 10.244.0.16:42758 - 57933 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109519s
	[INFO] 10.244.0.16:42758 - 58163 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000222452s
	[INFO] 10.244.0.16:46219 - 49034 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046514s
	[INFO] 10.244.0.16:46219 - 48861 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090599s
	[INFO] 10.244.0.16:38841 - 23335 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049238s
	[INFO] 10.244.0.16:38841 - 23162 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000120719s
	[INFO] 10.244.0.16:36810 - 56287 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001370566s
	[INFO] 10.244.0.16:36810 - 56459 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001599623s
	[INFO] 10.244.0.16:53737 - 45454 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071399s
	[INFO] 10.244.0.16:53737 - 45306 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067083s
	[INFO] 10.244.0.19:60043 - 33793 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164099s
	[INFO] 10.244.0.19:45569 - 24882 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008945s
	[INFO] 10.244.0.19:37408 - 63394 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120645s
	[INFO] 10.244.0.19:32799 - 53535 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000581761s
	[INFO] 10.244.0.19:55061 - 24202 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127028s
	[INFO] 10.244.0.19:52877 - 28567 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066952s
	[INFO] 10.244.0.19:41260 - 35512 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00224238s
	[INFO] 10.244.0.19:50161 - 49874 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002246064s
	[INFO] 10.244.0.19:55943 - 60460 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001310399s
	[INFO] 10.244.0.19:51706 - 64277 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001688868s
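The NXDOMAIN ladder in the coredns log above is ordinary kubelet resolv.conf search-path expansion: with the default `ndots:5`, a name containing fewer than five dots is first tried against each search domain, which is why `registry.kube-system.svc.cluster.local` shows up suffixed with `kube-system.svc.cluster.local`, `svc.cluster.local`, `cluster.local`, and `us-east-2.compute.internal` before the plain query returns NOERROR. A sketch of that expansion — the search-list contents and order are inferred from the queries above, and `ndots=5` is the assumed kubelet default:

```python
# Approximate glibc-style search-list expansion for a relative DNS name.
def query_order(name: str, search: list[str], ndots: int = 5) -> list[str]:
    candidates = []
    if name.count(".") < ndots:              # relative name: search list first
        candidates += [f"{name}.{d}" for d in search]
    candidates.append(name)                  # finally the name as given
    return candidates

search = ["kube-system.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "us-east-2.compute.internal"]

# Produces the same candidate set that appears as NXDOMAIN/NOERROR
# queries in the coredns log above.
for q in query_order("registry.kube-system.svc.cluster.local", search):
    print(q)
```

The registry service name has four dots, so every search suffix is tried (and fails) before the bare name resolves — five queries per lookup, doubled for A plus AAAA, exactly the pattern logged.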
	
	
	==> describe nodes <==
	Name:               addons-718366
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-718366
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=addons-718366
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T10_32_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-718366
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:32:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-718366
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:46:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:44:43 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:44:43 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:44:43 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:44:43 +0000   Mon, 30 Sep 2024 10:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-718366
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 77d89a06001e4e62a34a490bff9aa946
	  System UUID:                905a5f23-cdd8-48a6-a301-0dc3d894de03
	  Boot ID:                    cd5783c9-92b8-4cba-8495-065a6f022f89
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     cloud-spanner-emulator-5b584cc74-jgnx2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-tqpln           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-89d5ffd79-4zcrm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-dtmzl                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-718366                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-cx2x5                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-718366               250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-718366      200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6d7ts                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-718366               100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-jqf86            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 nvidia-device-plugin-daemonset-4vhfz       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-stxvp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-bjwsv             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node addons-718366 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node addons-718366 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node addons-718366 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node addons-718366 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node addons-718366 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node addons-718366 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node addons-718366 event: Registered Node addons-718366 in Controller
	  Normal   NodeReady                13m                kubelet          Node addons-718366 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 09:39] IPVS: rr: TCP 192.168.49.254:8443 - no destination available
	[Sep30 10:06] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] <==
	{"level":"warn","ts":"2024-09-30T10:32:15.963806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"502.065606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:32:15.963874Z","caller":"traceutil/trace.go:171","msg":"trace[1639210312] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:392; }","duration":"502.169636ms","start":"2024-09-30T10:32:15.461690Z","end":"2024-09-30T10:32:15.963860Z","steps":["trace[1639210312] 'agreement among raft nodes before linearized reading'  (duration: 340.590554ms)","trace[1639210312] 'range keys from in-memory index tree'  (duration: 161.458364ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:15.963905Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T10:32:15.461681Z","time spent":"502.21679ms","remote":"127.0.0.1:47722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/kube-system/registry\" "}
	{"level":"warn","ts":"2024-09-30T10:32:15.975672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.962851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:32:15.975744Z","caller":"traceutil/trace.go:171","msg":"trace[1497020414] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:392; }","duration":"514.06067ms","start":"2024-09-30T10:32:15.461669Z","end":"2024-09-30T10:32:15.975729Z","steps":["trace[1497020414] 'agreement among raft nodes before linearized reading'  (duration: 340.620576ms)","trace[1497020414] 'range keys from in-memory index tree'  (duration: 173.328327ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:15.975777Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T10:32:15.461648Z","time spent":"514.122937ms","remote":"127.0.0.1:47722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/default/cloud-spanner-emulator\" "}
	{"level":"warn","ts":"2024-09-30T10:32:15.976129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"543.814465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-cx2x5\" ","response":"range_response_count:1 size:5102"}
	{"level":"info","ts":"2024-09-30T10:32:15.976175Z","caller":"traceutil/trace.go:171","msg":"trace[164340211] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-cx2x5; range_end:; response_count:1; response_revision:392; }","duration":"543.863112ms","start":"2024-09-30T10:32:15.432303Z","end":"2024-09-30T10:32:15.976166Z","steps":["trace[164340211] 'agreement among raft nodes before linearized reading'  (duration: 369.990963ms)","trace[164340211] 'range keys from in-memory index tree'  (duration: 173.801529ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:15.976202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T10:32:15.432284Z","time spent":"543.912866ms","remote":"127.0.0.1:47438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":5126,"request content":"key:\"/registry/pods/kube-system/kindnet-cx2x5\" "}
	{"level":"warn","ts":"2024-09-30T10:32:15.993928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.619432ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032241754536841 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-6d7ts.17f9ff067a572d5a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-6d7ts.17f9ff067a572d5a\" value_size:634 lease:8128032241754536491 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-30T10:32:16.005403Z","caller":"traceutil/trace.go:171","msg":"trace[221753668] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:401; }","duration":"151.390807ms","start":"2024-09-30T10:32:15.853990Z","end":"2024-09-30T10:32:16.005381Z","steps":["trace[221753668] 'read index received'  (duration: 34.289µs)","trace[221753668] 'applied index is now lower than readState.Index'  (duration: 151.353163ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T10:32:16.025685Z","caller":"traceutil/trace.go:171","msg":"trace[1564697964] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"248.129407ms","start":"2024-09-30T10:32:15.777531Z","end":"2024-09-30T10:32:16.025660Z","steps":["trace[1564697964] 'process raft request'  (duration: 44.642585ms)","trace[1564697964] 'compare'  (duration: 50.754322ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:16.045362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.356368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:32:16.056252Z","caller":"traceutil/trace.go:171","msg":"trace[667812772] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"202.239182ms","start":"2024-09-30T10:32:15.853984Z","end":"2024-09-30T10:32:16.056223Z","steps":["trace[667812772] 'agreement among raft nodes before linearized reading'  (duration: 191.330883ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.205755Z","caller":"traceutil/trace.go:171","msg":"trace[685107055] linearizableReadLoop","detail":"{readStateIndex:407; appliedIndex:402; }","duration":"111.750218ms","start":"2024-09-30T10:32:16.093990Z","end":"2024-09-30T10:32:16.205740Z","steps":["trace[685107055] 'read index received'  (duration: 110.159088ms)","trace[685107055] 'applied index is now lower than readState.Index'  (duration: 1.59063ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:16.205871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.850417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-30T10:32:16.205893Z","caller":"traceutil/trace.go:171","msg":"trace[1709110772] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:398; }","duration":"111.899655ms","start":"2024-09-30T10:32:16.093987Z","end":"2024-09-30T10:32:16.205887Z","steps":["trace[1709110772] 'agreement among raft nodes before linearized reading'  (duration: 111.814619ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206103Z","caller":"traceutil/trace.go:171","msg":"trace[320854091] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"112.610299ms","start":"2024-09-30T10:32:16.093485Z","end":"2024-09-30T10:32:16.206096Z","steps":["trace[320854091] 'process raft request'  (duration: 112.105081ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206227Z","caller":"traceutil/trace.go:171","msg":"trace[273332653] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"112.693685ms","start":"2024-09-30T10:32:16.093527Z","end":"2024-09-30T10:32:16.206221Z","steps":["trace[273332653] 'process raft request'  (duration: 112.132609ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206323Z","caller":"traceutil/trace.go:171","msg":"trace[2053403231] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"112.580695ms","start":"2024-09-30T10:32:16.093736Z","end":"2024-09-30T10:32:16.206316Z","steps":["trace[2053403231] 'process raft request'  (duration: 111.949688ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206367Z","caller":"traceutil/trace.go:171","msg":"trace[754986013] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"112.543945ms","start":"2024-09-30T10:32:16.093817Z","end":"2024-09-30T10:32:16.206361Z","steps":["trace[754986013] 'process raft request'  (duration: 111.897775ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.214557Z","caller":"traceutil/trace.go:171","msg":"trace[1253211950] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"121.318358ms","start":"2024-09-30T10:32:16.093218Z","end":"2024-09-30T10:32:16.214537Z","steps":["trace[1253211950] 'process raft request'  (duration: 110.795453ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:42:02.621285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1521}
	{"level":"info","ts":"2024-09-30T10:42:02.650830Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1521,"took":"28.997753ms","hash":1350178088,"current-db-size-bytes":6029312,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3149824,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-09-30T10:42:02.650884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1350178088,"revision":1521,"compact-revision":-1}
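The repeated "apply request took too long" warnings above fire whenever a request exceeds etcd's 100ms expected-duration, which lines up with the other slowness in this run. A quick way to pull such entries out of a structured log like this one — the sample lines are shortened from the log above, and the sketch only handles millisecond-formatted `took` values (real etcd logs can also emit seconds, e.g. `"1.2s"`):

```python
import json
import re

log_lines = [
    '{"level":"warn","msg":"apply request took too long","took":"502.065606ms","expected-duration":"100ms"}',
    '{"level":"info","msg":"compact tree index","revision":1521}',
]

def slow_requests(lines, threshold_ms=100.0):
    """Return (duration_ms, msg) for entries whose 'took' exceeds the threshold."""
    slow = []
    for line in lines:
        rec = json.loads(line)
        took = rec.get("took")
        if not took:
            continue  # info lines without a duration field
        m = re.fullmatch(r"([\d.]+)ms", took)
        if m and float(m.group(1)) > threshold_ms:
            slow.append((float(m.group(1)), rec["msg"]))
    return slow

print(slow_requests(log_lines))  # [(502.065606, 'apply request took too long')]
```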
	
	
	==> gcp-auth [f7e963bb19262a09a16f93679318ed840adcaedcbd6e1425db0d952add42b6b6] <==
	2024/09/30 10:33:47 GCP Auth Webhook started!
	2024/09/30 10:34:50 Ready to marshal response ...
	2024/09/30 10:34:50 Ready to write response ...
	2024/09/30 10:34:50 Ready to marshal response ...
	2024/09/30 10:34:50 Ready to write response ...
	2024/09/30 10:34:50 Ready to marshal response ...
	2024/09/30 10:34:50 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:43:04 Ready to marshal response ...
	2024/09/30 10:43:04 Ready to write response ...
	2024/09/30 10:43:15 Ready to marshal response ...
	2024/09/30 10:43:15 Ready to write response ...
	2024/09/30 10:43:36 Ready to marshal response ...
	2024/09/30 10:43:36 Ready to write response ...
	2024/09/30 10:44:20 Ready to marshal response ...
	2024/09/30 10:44:20 Ready to write response ...
	2024/09/30 10:46:38 Ready to marshal response ...
	2024/09/30 10:46:38 Ready to write response ...
	
	
	==> kernel <==
	 10:46:49 up 1 day, 10:29,  0 users,  load average: 0.27, 0.64, 1.38
	Linux addons-718366 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] <==
	I0930 10:44:45.386884       1 main.go:299] handling current node
	I0930 10:44:55.386753       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:44:55.386787       1 main.go:299] handling current node
	I0930 10:45:05.386782       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:45:05.386819       1 main.go:299] handling current node
	I0930 10:45:15.387445       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:45:15.387482       1 main.go:299] handling current node
	I0930 10:45:25.392907       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:45:25.393045       1 main.go:299] handling current node
	I0930 10:45:35.394112       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:45:35.394231       1 main.go:299] handling current node
	I0930 10:45:45.387812       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:45:45.387849       1 main.go:299] handling current node
	I0930 10:45:55.389284       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:45:55.389407       1 main.go:299] handling current node
	I0930 10:46:05.388813       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:46:05.388847       1 main.go:299] handling current node
	I0930 10:46:15.387389       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:46:15.387430       1 main.go:299] handling current node
	I0930 10:46:25.392505       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:46:25.392637       1 main.go:299] handling current node
	I0930 10:46:35.395932       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:46:35.395964       1 main.go:299] handling current node
	I0930 10:46:45.386765       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:46:45.386887       1 main.go:299] handling current node
	
	
	==> kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] <==
	 > logger="UnhandledError"
	E0930 10:34:15.885240       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.37.71:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.37.71:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.37.71:443: connect: connection refused" logger="UnhandledError"
	I0930 10:34:15.933537       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0930 10:34:15.944252       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0930 10:42:54.890805       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.140.135"}
	I0930 10:43:26.331792       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0930 10:43:44.099917       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0930 10:43:51.726819       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.727276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.753391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.753536       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.783338       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.783491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.824695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.824740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.863148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.863276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0930 10:43:52.825369       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0930 10:43:52.863903       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0930 10:43:52.910514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0930 10:44:14.305406       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0930 10:44:15.424396       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0930 10:44:19.885804       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0930 10:44:20.216244       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.7.114"}
	I0930 10:46:39.146523       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.76.220"}
	
	
	==> kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] <==
	E0930 10:45:23.434336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:45:43.323164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:45:43.323206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:45:56.108635       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:45:56.108682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:45:58.766180       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:45:58.766224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:45:59.055359       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:45:59.055399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:46:37.468440       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:46:37.468483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:46:37.644914       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:46:37.644960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:46:38.958465       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="76.635846ms"
	I0930 10:46:38.990153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.630182ms"
	I0930 10:46:38.990301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.547µs"
	W0930 10:46:39.538516       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:46:39.538562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:46:40.603978       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.065416ms"
	I0930 10:46:40.604167       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="54.809µs"
	I0930 10:46:41.203934       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0930 10:46:41.208592       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.284µs"
	I0930 10:46:41.215444       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0930 10:46:42.828351       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:46:42.828393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] <==
	I0930 10:32:17.422602       1 server_linux.go:66] "Using iptables proxy"
	I0930 10:32:18.022042       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0930 10:32:18.046247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:32:18.422271       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0930 10:32:18.422424       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:32:18.435427       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:32:18.436261       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:32:18.436290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:32:18.439117       1 config.go:199] "Starting service config controller"
	I0930 10:32:18.439168       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:32:18.439259       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:32:18.439272       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:32:18.439784       1 config.go:328] "Starting node config controller"
	I0930 10:32:18.439802       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:32:18.539827       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 10:32:18.539944       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:32:18.539974       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] <==
	I0930 10:32:05.100973       1 serving.go:386] Generated self-signed cert in-memory
	W0930 10:32:06.502704       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 10:32:06.502825       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 10:32:06.502860       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 10:32:06.502922       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 10:32:06.525286       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 10:32:06.527188       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:32:06.529906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 10:32:06.530137       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 10:32:06.530163       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 10:32:06.530376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0930 10:32:06.535468       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 10:32:06.535769       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 10:32:07.631231       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 10:46:39 addons-718366 kubelet[1518]: E0930 10:46:39.858007    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df0189ce-cfa6-4fcb-9cb0-001e99817661"
	Sep 30 10:46:40 addons-718366 kubelet[1518]: I0930 10:46:40.264108    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67mrt\" (UniqueName: \"kubernetes.io/projected/201cdd5a-777d-406a-a3c3-ae55dfa26b03-kube-api-access-67mrt\") pod \"201cdd5a-777d-406a-a3c3-ae55dfa26b03\" (UID: \"201cdd5a-777d-406a-a3c3-ae55dfa26b03\") "
	Sep 30 10:46:40 addons-718366 kubelet[1518]: I0930 10:46:40.272425    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201cdd5a-777d-406a-a3c3-ae55dfa26b03-kube-api-access-67mrt" (OuterVolumeSpecName: "kube-api-access-67mrt") pod "201cdd5a-777d-406a-a3c3-ae55dfa26b03" (UID: "201cdd5a-777d-406a-a3c3-ae55dfa26b03"). InnerVolumeSpecName "kube-api-access-67mrt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:46:40 addons-718366 kubelet[1518]: I0930 10:46:40.365247    1518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-67mrt\" (UniqueName: \"kubernetes.io/projected/201cdd5a-777d-406a-a3c3-ae55dfa26b03-kube-api-access-67mrt\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:46:40 addons-718366 kubelet[1518]: I0930 10:46:40.534824    1518 scope.go:117] "RemoveContainer" containerID="eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b"
	Sep 30 10:46:40 addons-718366 kubelet[1518]: I0930 10:46:40.556045    1518 scope.go:117] "RemoveContainer" containerID="eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b"
	Sep 30 10:46:40 addons-718366 kubelet[1518]: E0930 10:46:40.556490    1518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b\": container with ID starting with eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b not found: ID does not exist" containerID="eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b"
	Sep 30 10:46:40 addons-718366 kubelet[1518]: I0930 10:46:40.556528    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b"} err="failed to get container status \"eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b\": rpc error: code = NotFound desc = could not find container \"eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b\": container with ID starting with eddd4385f059415600abb03d384c43cf0cddebc5757f1be2189c4f4abe2bd14b not found: ID does not exist"
	Sep 30 10:46:41 addons-718366 kubelet[1518]: I0930 10:46:41.221035    1518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-tqpln" podStartSLOduration=2.298981157 podStartE2EDuration="3.221011108s" podCreationTimestamp="2024-09-30 10:46:38 +0000 UTC" firstStartedPulling="2024-09-30 10:46:39.312482988 +0000 UTC m=+871.578678813" lastFinishedPulling="2024-09-30 10:46:40.234512888 +0000 UTC m=+872.500708764" observedRunningTime="2024-09-30 10:46:40.580318992 +0000 UTC m=+872.846514818" watchObservedRunningTime="2024-09-30 10:46:41.221011108 +0000 UTC m=+873.487206942"
	Sep 30 10:46:41 addons-718366 kubelet[1518]: I0930 10:46:41.858685    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201cdd5a-777d-406a-a3c3-ae55dfa26b03" path="/var/lib/kubelet/pods/201cdd5a-777d-406a-a3c3-ae55dfa26b03/volumes"
	Sep 30 10:46:41 addons-718366 kubelet[1518]: I0930 10:46:41.859078    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837dec2f-bde8-4397-97a0-04d700a97948" path="/var/lib/kubelet/pods/837dec2f-bde8-4397-97a0-04d700a97948/volumes"
	Sep 30 10:46:41 addons-718366 kubelet[1518]: I0930 10:46:41.859433    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3dc566-2d1f-4097-969c-f59a6edecae8" path="/var/lib/kubelet/pods/9d3dc566-2d1f-4097-969c-f59a6edecae8/volumes"
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.487767    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6jcf\" (UniqueName: \"kubernetes.io/projected/f8973837-432c-4179-90fb-061f9cdc391b-kube-api-access-w6jcf\") pod \"f8973837-432c-4179-90fb-061f9cdc391b\" (UID: \"f8973837-432c-4179-90fb-061f9cdc391b\") "
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.487830    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8973837-432c-4179-90fb-061f9cdc391b-webhook-cert\") pod \"f8973837-432c-4179-90fb-061f9cdc391b\" (UID: \"f8973837-432c-4179-90fb-061f9cdc391b\") "
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.490352    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8973837-432c-4179-90fb-061f9cdc391b-kube-api-access-w6jcf" (OuterVolumeSpecName: "kube-api-access-w6jcf") pod "f8973837-432c-4179-90fb-061f9cdc391b" (UID: "f8973837-432c-4179-90fb-061f9cdc391b"). InnerVolumeSpecName "kube-api-access-w6jcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.494252    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8973837-432c-4179-90fb-061f9cdc391b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f8973837-432c-4179-90fb-061f9cdc391b" (UID: "f8973837-432c-4179-90fb-061f9cdc391b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.548484    1518 scope.go:117] "RemoveContainer" containerID="5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295"
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.569209    1518 scope.go:117] "RemoveContainer" containerID="5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295"
	Sep 30 10:46:44 addons-718366 kubelet[1518]: E0930 10:46:44.570019    1518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295\": container with ID starting with 5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295 not found: ID does not exist" containerID="5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295"
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.570066    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295"} err="failed to get container status \"5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295\": rpc error: code = NotFound desc = could not find container \"5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295\": container with ID starting with 5c8d71a37f78818db56d0c5df7693620a70f44d78edf313d9feaa61cadf35295 not found: ID does not exist"
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.588604    1518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w6jcf\" (UniqueName: \"kubernetes.io/projected/f8973837-432c-4179-90fb-061f9cdc391b-kube-api-access-w6jcf\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:46:44 addons-718366 kubelet[1518]: I0930 10:46:44.588642    1518 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8973837-432c-4179-90fb-061f9cdc391b-webhook-cert\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:46:45 addons-718366 kubelet[1518]: I0930 10:46:45.858096    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8973837-432c-4179-90fb-061f9cdc391b" path="/var/lib/kubelet/pods/f8973837-432c-4179-90fb-061f9cdc391b/volumes"
	Sep 30 10:46:48 addons-718366 kubelet[1518]: E0930 10:46:48.179568    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693208179334380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:547957,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:46:48 addons-718366 kubelet[1518]: E0930 10:46:48.179607    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693208179334380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:547957,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8564280a03e37716b0a9e9a9f7d87bbde241c67a46dcec2bb762772d073dec52] <==
	I0930 10:32:56.545004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:32:56.563543       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:32:56.563600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:32:56.576241       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:32:56.576984       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409!
	I0930 10:32:56.576481       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a56189b9-c62a-4b37-a064-2fefbb3251ee", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409 became leader
	I0930 10:32:56.677903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-718366 -n addons-718366
helpers_test.go:261: (dbg) Run:  kubectl --context addons-718366 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-718366 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-718366 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-718366/192.168.49.2
	Start Time:       Mon, 30 Sep 2024 10:34:50 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q78z7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q78z7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/busybox to addons-718366
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    111s (x44 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.32s)
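The Events table in the post-mortem above shows the root cause for the stuck busybox pod: every pull of `gcr.io/k8s-minikube/busybox:1.28.4-glibc` fails with an authentication error ("unable to retrieve auth token: invalid username/password"), which then cascades into `ErrImagePull` and `ImagePullBackOff`. When triaging dumps like this offline, the distinct kubelet failure messages can be isolated with standard text tools — a minimal sketch (the `/tmp` dump filename is hypothetical; the sample lines are copied from the report):

```shell
# Save a few Events rows from the `kubectl describe pod busybox` output above
# into a file, as if it were a captured post-mortem dump (path is an example).
cat > /tmp/busybox-describe.txt <<'EOF'
  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
  Warning  Failed     10m (x6 over 11m)    kubelet            Error: ImagePullBackOff
EOF
# Keep only the Failed rows, strip the age/source columns, and deduplicate,
# leaving just the distinct failure messages.
grep 'Warning  Failed' /tmp/busybox-describe.txt | sed 's/.*kubelet *//' | sort -u
```

This reduces a long repetitive Events table to the handful of unique errors, making the auth failure stand out from the generic back-off noise.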

                                                
                                    
TestAddons/parallel/MetricsServer (331.88s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 11.811686ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-jqf86" [37c7c588-691f-43b1-bc7e-d9d29b8c740e] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00355085s
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (95.682932ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 11m45.2599488s

                                                
                                                
** /stderr **
I0930 10:43:58.262902  575428 retry.go:31] will retry after 4.348948441s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (87.844179ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 11m49.698659708s

** /stderr **
I0930 10:44:02.701502  575428 retry.go:31] will retry after 3.977811284s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (124.492091ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 11m53.801463064s

** /stderr **
I0930 10:44:06.804891  575428 retry.go:31] will retry after 8.880863163s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (86.722066ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 12m2.768909026s

** /stderr **
I0930 10:44:15.772849  575428 retry.go:31] will retry after 5.256440091s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (98.615884ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 12m8.124553942s

** /stderr **
I0930 10:44:21.128183  575428 retry.go:31] will retry after 14.268497247s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (90.68532ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 12m22.484740249s

** /stderr **
I0930 10:44:35.488527  575428 retry.go:31] will retry after 27.104009703s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (92.400555ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 12m49.683114515s

** /stderr **
I0930 10:45:02.686213  575428 retry.go:31] will retry after 35.268700655s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (89.71259ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 13m25.042117037s

** /stderr **
I0930 10:45:38.045799  575428 retry.go:31] will retry after 31.146082023s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (92.610336ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 13m56.281028095s

** /stderr **
I0930 10:46:09.284839  575428 retry.go:31] will retry after 1m10.697949387s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (86.897264ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 15m7.067474416s

** /stderr **
I0930 10:47:20.070710  575428 retry.go:31] will retry after 52.836427563s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (99.893197ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 16m0.005167256s

** /stderr **
I0930 10:48:13.008838  575428 retry.go:31] will retry after 1m7.63914841s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-718366 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-718366 top pods -n kube-system: exit status 1 (81.061348ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dtmzl, age: 17m7.728322481s

** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
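The retry sequence above comes from minikube's retry.go, which re-runs `kubectl top pods` with growing, jittered delays until metrics appear or the budget runs out. A minimal shell sketch of the same poll-with-backoff idea (the helper name and the plain doubling are assumptions, not minikube's exact code):

```shell
# Hypothetical sketch, not minikube's retry.go: re-run a command, doubling
# the wait between attempts, until it succeeds or attempts are exhausted.
retry_with_backoff() {
  cmd=$1 attempts=$2 delay=${3:-1}
  i=1
  while [ "$i" -le "$attempts" ]; do
    if $cmd; then
      return 0                       # command finally succeeded
    fi
    echo "attempt $i failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))             # grow the wait (retry.go also adds jitter)
    i=$((i + 1))
  done
  return 1                           # all attempts failed
}

# Usage mirroring the failing check above:
#   retry_with_backoff "kubectl --context addons-718366 top pods -n kube-system" 10
```

The doubling caps total polling time at roughly twice the final delay, which is why the logged intervals grow from seconds to over a minute.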
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-718366
helpers_test.go:235: (dbg) docker inspect addons-718366:

-- stdout --
	[
	    {
	        "Id": "ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894",
	        "Created": "2024-09-30T10:31:43.905448896Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 576683,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-30T10:31:44.063796451Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/hosts",
	        "LogPath": "/var/lib/docker/containers/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894/ed341e1151f0b375702bd875565bf1dd25eb125eef5df58b7589b95d981d2894-json.log",
	        "Name": "/addons-718366",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-718366:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-718366",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64-init/diff:/var/lib/docker/overlay2/89114fb86e05dfc705528dc965d39dcbdae2b3c32ee9939bb163740716767303/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b0a01dc9769f647e2fa9445d5bf4e9ab2e1115d9e0e5acff67a45091631ce64/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-718366",
	                "Source": "/var/lib/docker/volumes/addons-718366/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-718366",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-718366",
	                "name.minikube.sigs.k8s.io": "addons-718366",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4000b12ceab08239f17e20c17eb46f041a0a6e684a414119cdec0d3429928e0b",
	            "SandboxKey": "/var/run/docker/netns/4000b12ceab0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38991"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-718366": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49bb2287327a5d5bf19993c7fe6d9348c5cc91efc29c195f3a50d6290c89924e",
	                    "EndpointID": "a3d75320f00be0ed0cbab5bc16e3263619548cfeae3e76a58471414489bf0190",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-718366",
	                        "ed341e1151f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
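The full `docker inspect` dump above is mostly noise when debugging a single failure; `docker inspect -f` with a Go template extracts one field at a time. A small sketch (the helper names are made up; only the `-f` flag and the container name `addons-718366` come from the log):

```shell
# Hypothetical helpers around `docker inspect -f` Go templates.
container_state() {
  # Prints e.g. "running", "exited" for the named container.
  docker inspect -f '{{.State.Status}}' "$1" 2>/dev/null
}

wait_for_state() {
  # Poll until the container reports the wanted state, up to $3 tries.
  name=$1 want=$2 tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    [ "$(container_state "$name")" = "$want" ] && return 0
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# e.g. wait_for_state addons-718366 running
```

The same template syntax reaches nested fields, e.g. `docker inspect -f '{{.NetworkSettings.Networks}}' addons-718366` for the network block dumped above.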
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-718366 -n addons-718366
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 logs -n 25: (1.543935873s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-575153                                                                     | download-only-575153   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | --download-only -p                                                                          | download-docker-121895 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | download-docker-121895                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-121895                                                                   | download-docker-121895 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-919874   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | binary-mirror-919874                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44655                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-919874                                                                     | binary-mirror-919874   | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| addons  | enable dashboard -p                                                                         | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | addons-718366                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | addons-718366                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-718366 --wait=true                                                                | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:42 UTC | 30 Sep 24 10:42 UTC |
	|         | -p addons-718366                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-718366 addons disable                                                                | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-718366 addons                                                                        | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-718366 addons                                                                        | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:43 UTC | 30 Sep 24 10:43 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-718366 ip                                                                            | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC | 30 Sep 24 10:44 UTC |
	| addons  | addons-718366 addons disable                                                                | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC | 30 Sep 24 10:44 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC | 30 Sep 24 10:44 UTC |
	|         | addons-718366                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-718366 ssh curl -s                                                                   | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-718366 ip                                                                            | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:46 UTC | 30 Sep 24 10:46 UTC |
	| addons  | addons-718366 addons disable                                                                | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:46 UTC | 30 Sep 24 10:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-718366 addons disable                                                                | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:46 UTC | 30 Sep 24 10:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:46 UTC | 30 Sep 24 10:46 UTC |
	|         | -p addons-718366                                                                            |                        |         |         |                     |                     |
	| addons  | addons-718366 addons disable                                                                | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:47 UTC | 30 Sep 24 10:47 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-718366 ssh cat                                                                       | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:47 UTC | 30 Sep 24 10:47 UTC |
	|         | /opt/local-path-provisioner/pvc-271312af-c6d1-4918-84a6-e0da61228c61_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-718366 addons disable                                                                | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:47 UTC | 30 Sep 24 10:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:48 UTC | 30 Sep 24 10:48 UTC |
	|         | addons-718366                                                                               |                        |         |         |                     |                     |
	| addons  | addons-718366 addons                                                                        | addons-718366          | jenkins | v1.34.0 | 30 Sep 24 10:49 UTC | 30 Sep 24 10:49 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:31:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:31:19.588253  576188 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:31:19.588435  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:19.588464  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:31:19.588483  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:19.588757  576188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 10:31:19.589326  576188 out.go:352] Setting JSON to false
	I0930 10:31:19.590293  576188 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":123226,"bootTime":1727569054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0930 10:31:19.590400  576188 start.go:139] virtualization:  
	I0930 10:31:19.592475  576188 out.go:177] * [addons-718366] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:31:19.593683  576188 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:31:19.593737  576188 notify.go:220] Checking for updates...
	I0930 10:31:19.596014  576188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:31:19.597688  576188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:31:19.598789  576188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	I0930 10:31:19.600169  576188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:31:19.601274  576188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:31:19.602931  576188 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:31:19.624953  576188 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:31:19.625081  576188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:19.686322  576188 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:31:19.676149404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:19.686454  576188 docker.go:318] overlay module found
	I0930 10:31:19.688493  576188 out.go:177] * Using the docker driver based on user configuration
	I0930 10:31:19.689696  576188 start.go:297] selected driver: docker
	I0930 10:31:19.689712  576188 start.go:901] validating driver "docker" against <nil>
	I0930 10:31:19.689727  576188 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:31:19.690364  576188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:19.737739  576188 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:31:19.72812774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:19.737977  576188 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:31:19.738212  576188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:31:19.739656  576188 out.go:177] * Using Docker driver with root privileges
	I0930 10:31:19.740990  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:31:19.741052  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:31:19.741072  576188 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 10:31:19.741162  576188 start.go:340] cluster config:
	{Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:31:19.743023  576188 out.go:177] * Starting "addons-718366" primary control-plane node in "addons-718366" cluster
	I0930 10:31:19.743990  576188 cache.go:121] Beginning downloading kic base image for docker with crio
	I0930 10:31:19.745206  576188 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:31:19.746898  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:19.746949  576188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0930 10:31:19.746962  576188 cache.go:56] Caching tarball of preloaded images
	I0930 10:31:19.747074  576188 preload.go:172] Found /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0930 10:31:19.747089  576188 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 10:31:19.747446  576188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json ...
	I0930 10:31:19.747510  576188 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:31:19.747474  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json: {Name:mk2af656d2be7cf8581e9e41a4766db590e98cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:19.763017  576188 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:31:19.763137  576188 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:31:19.763167  576188 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0930 10:31:19.763175  576188 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0930 10:31:19.763182  576188 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0930 10:31:19.763188  576188 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0930 10:31:36.606388  576188 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0930 10:31:36.606431  576188 cache.go:194] Successfully downloaded all kic artifacts
	I0930 10:31:36.606473  576188 start.go:360] acquireMachinesLock for addons-718366: {Name:mkcc9f52048bcb539eb2c19ba8edac315f37b684 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:31:36.606610  576188 start.go:364] duration metric: took 113.425µs to acquireMachinesLock for "addons-718366"
	I0930 10:31:36.606640  576188 start.go:93] Provisioning new machine with config: &{Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 10:31:36.606722  576188 start.go:125] createHost starting for "" (driver="docker")
	I0930 10:31:36.609505  576188 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0930 10:31:36.609800  576188 start.go:159] libmachine.API.Create for "addons-718366" (driver="docker")
	I0930 10:31:36.609842  576188 client.go:168] LocalClient.Create starting
	I0930 10:31:36.609960  576188 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem
	I0930 10:31:36.990982  576188 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem
	I0930 10:31:37.632250  576188 cli_runner.go:164] Run: docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0930 10:31:37.647997  576188 cli_runner.go:211] docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0930 10:31:37.648087  576188 network_create.go:284] running [docker network inspect addons-718366] to gather additional debugging logs...
	I0930 10:31:37.648108  576188 cli_runner.go:164] Run: docker network inspect addons-718366
	W0930 10:31:37.666472  576188 cli_runner.go:211] docker network inspect addons-718366 returned with exit code 1
	I0930 10:31:37.666507  576188 network_create.go:287] error running [docker network inspect addons-718366]: docker network inspect addons-718366: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-718366 not found
	I0930 10:31:37.666521  576188 network_create.go:289] output of [docker network inspect addons-718366]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-718366 not found
	
	** /stderr **
	I0930 10:31:37.666652  576188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:31:37.682855  576188 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017b0f20}
	I0930 10:31:37.682901  576188 network_create.go:124] attempt to create docker network addons-718366 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0930 10:31:37.682963  576188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-718366 addons-718366
	I0930 10:31:37.753006  576188 network_create.go:108] docker network addons-718366 192.168.49.0/24 created
	I0930 10:31:37.753040  576188 kic.go:121] calculated static IP "192.168.49.2" for the "addons-718366" container
	I0930 10:31:37.753117  576188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0930 10:31:37.768087  576188 cli_runner.go:164] Run: docker volume create addons-718366 --label name.minikube.sigs.k8s.io=addons-718366 --label created_by.minikube.sigs.k8s.io=true
	I0930 10:31:37.784157  576188 oci.go:103] Successfully created a docker volume addons-718366
	I0930 10:31:37.784245  576188 cli_runner.go:164] Run: docker run --rm --name addons-718366-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --entrypoint /usr/bin/test -v addons-718366:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0930 10:31:39.859396  576188 cli_runner.go:217] Completed: docker run --rm --name addons-718366-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --entrypoint /usr/bin/test -v addons-718366:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.075110378s)
	I0930 10:31:39.859424  576188 oci.go:107] Successfully prepared a docker volume addons-718366
	I0930 10:31:39.859448  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:39.859467  576188 kic.go:194] Starting extracting preloaded images to volume ...
	I0930 10:31:39.859530  576188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718366:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0930 10:31:43.835757  576188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718366:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.97617046s)
	I0930 10:31:43.835789  576188 kic.go:203] duration metric: took 3.976319306s to extract preloaded images to volume ...
	W0930 10:31:43.835943  576188 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0930 10:31:43.836061  576188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0930 10:31:43.891196  576188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-718366 --name addons-718366 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718366 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-718366 --network addons-718366 --ip 192.168.49.2 --volume addons-718366:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0930 10:31:44.248245  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Running}}
	I0930 10:31:44.274600  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:44.306529  576188 cli_runner.go:164] Run: docker exec addons-718366 stat /var/lib/dpkg/alternatives/iptables
	I0930 10:31:44.359444  576188 oci.go:144] the created container "addons-718366" has a running status.
	I0930 10:31:44.359471  576188 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa...
	I0930 10:31:44.997180  576188 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0930 10:31:45.033020  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:45.054795  576188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0930 10:31:45.054823  576188 kic_runner.go:114] Args: [docker exec --privileged addons-718366 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0930 10:31:45.150433  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:31:45.178099  576188 machine.go:93] provisionDockerMachine start ...
	I0930 10:31:45.178219  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.203008  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.203294  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.203305  576188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 10:31:45.341698  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718366
	
	I0930 10:31:45.341727  576188 ubuntu.go:169] provisioning hostname "addons-718366"
	I0930 10:31:45.341795  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.364079  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.364321  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.364339  576188 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-718366 && echo "addons-718366" | sudo tee /etc/hostname
	I0930 10:31:45.513605  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718366
	
	I0930 10:31:45.513697  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.531270  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:45.531519  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:45.531542  576188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-718366' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-718366/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-718366' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 10:31:45.657393  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 10:31:45.657421  576188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-570035/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-570035/.minikube}
	I0930 10:31:45.657449  576188 ubuntu.go:177] setting up certificates
	I0930 10:31:45.657461  576188 provision.go:84] configureAuth start
	I0930 10:31:45.657532  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:45.674066  576188 provision.go:143] copyHostCerts
	I0930 10:31:45.674149  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/ca.pem (1078 bytes)
	I0930 10:31:45.674271  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/cert.pem (1123 bytes)
	I0930 10:31:45.674342  576188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-570035/.minikube/key.pem (1679 bytes)
	I0930 10:31:45.674396  576188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem org=jenkins.addons-718366 san=[127.0.0.1 192.168.49.2 addons-718366 localhost minikube]
	I0930 10:31:45.981328  576188 provision.go:177] copyRemoteCerts
	I0930 10:31:45.981423  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 10:31:45.981472  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:45.997951  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.090693  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 10:31:46.116251  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 10:31:46.141025  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0930 10:31:46.166818  576188 provision.go:87] duration metric: took 509.328593ms to configureAuth
	I0930 10:31:46.166888  576188 ubuntu.go:193] setting minikube options for container-runtime
	I0930 10:31:46.167109  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:31:46.167220  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.183793  576188 main.go:141] libmachine: Using SSH client type: native
	I0930 10:31:46.184047  576188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 38988 <nil> <nil>}
	I0930 10:31:46.184069  576188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 10:31:46.414611  576188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 10:31:46.414638  576188 machine.go:96] duration metric: took 1.236519349s to provisionDockerMachine
	I0930 10:31:46.414654  576188 client.go:171] duration metric: took 9.804797803s to LocalClient.Create
	I0930 10:31:46.414708  576188 start.go:167] duration metric: took 9.804909414s to libmachine.API.Create "addons-718366"
	I0930 10:31:46.414724  576188 start.go:293] postStartSetup for "addons-718366" (driver="docker")
	I0930 10:31:46.414735  576188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 10:31:46.414836  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 10:31:46.414922  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.432825  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.526839  576188 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 10:31:46.529986  576188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 10:31:46.530020  576188 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 10:31:46.530031  576188 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 10:31:46.530038  576188 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0930 10:31:46.530053  576188 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-570035/.minikube/addons for local assets ...
	I0930 10:31:46.530129  576188 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-570035/.minikube/files for local assets ...
	I0930 10:31:46.530155  576188 start.go:296] duration metric: took 115.424998ms for postStartSetup
	I0930 10:31:46.530481  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:46.546445  576188 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/config.json ...
	I0930 10:31:46.546743  576188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:31:46.546793  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.563100  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.658380  576188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0930 10:31:46.662859  576188 start.go:128] duration metric: took 10.056121452s to createHost
	I0930 10:31:46.662883  576188 start.go:83] releasing machines lock for "addons-718366", held for 10.056259303s
	I0930 10:31:46.662953  576188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718366
	I0930 10:31:46.679358  576188 ssh_runner.go:195] Run: cat /version.json
	I0930 10:31:46.679415  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.679741  576188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 10:31:46.679803  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:31:46.704694  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.707977  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:31:46.917060  576188 ssh_runner.go:195] Run: systemctl --version
	I0930 10:31:46.921195  576188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 10:31:47.061112  576188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 10:31:47.065232  576188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:31:47.086297  576188 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0930 10:31:47.086388  576188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:31:47.121211  576188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0930 10:31:47.121240  576188 start.go:495] detecting cgroup driver to use...
	I0930 10:31:47.121275  576188 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:31:47.121327  576188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 10:31:47.138863  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 10:31:47.150816  576188 docker.go:217] disabling cri-docker service (if available) ...
	I0930 10:31:47.150879  576188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 10:31:47.165652  576188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 10:31:47.179926  576188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 10:31:47.273399  576188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 10:31:47.363581  576188 docker.go:233] disabling docker service ...
	I0930 10:31:47.363669  576188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 10:31:47.383649  576188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 10:31:47.396300  576188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 10:31:47.479534  576188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 10:31:47.578817  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 10:31:47.590693  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:31:47.606912  576188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 10:31:47.606982  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.616770  576188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 10:31:47.616838  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.626842  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.636932  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.646765  576188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 10:31:47.655795  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.665503  576188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.681353  576188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:31:47.691540  576188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 10:31:47.700478  576188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 10:31:47.709442  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:31:47.791594  576188 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 10:31:47.910242  576188 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 10:31:47.910380  576188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 10:31:47.913887  576188 start.go:563] Will wait 60s for crictl version
	I0930 10:31:47.913948  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:31:47.917201  576188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 10:31:47.956213  576188 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0930 10:31:47.956327  576188 ssh_runner.go:195] Run: crio --version
	I0930 10:31:47.995739  576188 ssh_runner.go:195] Run: crio --version
	I0930 10:31:48.038600  576188 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0930 10:31:48.040972  576188 cli_runner.go:164] Run: docker network inspect addons-718366 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:31:48.059448  576188 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0930 10:31:48.063378  576188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:31:48.074967  576188 kubeadm.go:883] updating cluster {Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 10:31:48.075101  576188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:31:48.075164  576188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:31:48.152821  576188 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 10:31:48.152846  576188 crio.go:433] Images already preloaded, skipping extraction
	I0930 10:31:48.152903  576188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:31:48.188287  576188 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 10:31:48.188312  576188 cache_images.go:84] Images are preloaded, skipping loading
	I0930 10:31:48.188323  576188 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0930 10:31:48.188415  576188 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-718366 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 10:31:48.188496  576188 ssh_runner.go:195] Run: crio config
	I0930 10:31:48.238352  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:31:48.238376  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:31:48.238386  576188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 10:31:48.238408  576188 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-718366 NodeName:addons-718366 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 10:31:48.238553  576188 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-718366"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 10:31:48.238630  576188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 10:31:48.247791  576188 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 10:31:48.247902  576188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 10:31:48.256589  576188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0930 10:31:48.274946  576188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 10:31:48.293776  576188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0930 10:31:48.312418  576188 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0930 10:31:48.315789  576188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:31:48.326439  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:31:48.407610  576188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:31:48.421862  576188 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366 for IP: 192.168.49.2
	I0930 10:31:48.421936  576188 certs.go:194] generating shared ca certs ...
	I0930 10:31:48.421965  576188 certs.go:226] acquiring lock for ca certs: {Name:mk1a6e0acac4c352dd045fb15e8f16e43e290be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.422139  576188 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key
	I0930 10:31:48.852559  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt ...
	I0930 10:31:48.852592  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt: {Name:mkf151645d175ccb0b3534f7f3a47f78c7b74bd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.852823  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key ...
	I0930 10:31:48.852839  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key: {Name:mk253c50c9e044c6b24426ba126fc768ae2c086d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:48.852936  576188 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key
	I0930 10:31:49.127433  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt ...
	I0930 10:31:49.127472  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt: {Name:mk3c5c40e5e854bce5292f6c8b72b378b70a89ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.127671  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key ...
	I0930 10:31:49.127693  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key: {Name:mkccb69636b16c12bfb67aee8a9ccc8fbc4adc20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.127784  576188 certs.go:256] generating profile certs ...
	I0930 10:31:49.127846  576188 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key
	I0930 10:31:49.127867  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt with IP's: []
	I0930 10:31:49.435254  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt ...
	I0930 10:31:49.435286  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: {Name:mkb5471f9020f84972ffa54ded95d7795d2a1016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.435477  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key ...
	I0930 10:31:49.435489  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.key: {Name:mk3319c7a4b7aa7eacc7a275bdff66d1921999a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:49.435574  576188 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da
	I0930 10:31:49.435592  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0930 10:31:50.182674  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da ...
	I0930 10:31:50.182710  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da: {Name:mk6507e673c5274a73199d398bdbaf9b2d7b6554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.182907  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da ...
	I0930 10:31:50.182921  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da: {Name:mk737ffdf84242931763a97a2893d5f88d102eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.183007  576188 certs.go:381] copying /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt.484276da -> /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt
	I0930 10:31:50.183084  576188 certs.go:385] copying /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key.484276da -> /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key
	I0930 10:31:50.183135  576188 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key
	I0930 10:31:50.183156  576188 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt with IP's: []
	I0930 10:31:50.657677  576188 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt ...
	I0930 10:31:50.657708  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt: {Name:mkddac17456589328bd0297cfc529913e40d6096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.657893  576188 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key ...
	I0930 10:31:50.657907  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key: {Name:mk1da3d7241ee96e850a287589cbd33941beaf05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:50.659767  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 10:31:50.659810  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/ca.pem (1078 bytes)
	I0930 10:31:50.659833  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/cert.pem (1123 bytes)
	I0930 10:31:50.659862  576188 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-570035/.minikube/certs/key.pem (1679 bytes)
	I0930 10:31:50.660447  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 10:31:50.684494  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 10:31:50.708442  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 10:31:50.732440  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0930 10:31:50.756657  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 10:31:50.780179  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 10:31:50.804081  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 10:31:50.832833  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 10:31:50.870081  576188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 10:31:50.894487  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 10:31:50.911847  576188 ssh_runner.go:195] Run: openssl version
	I0930 10:31:50.917167  576188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 10:31:50.926449  576188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.929974  576188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:31 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.930037  576188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:31:50.936865  576188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 10:31:50.946146  576188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 10:31:50.949263  576188 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 10:31:50.949326  576188 kubeadm.go:392] StartCluster: {Name:addons-718366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-718366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:31:50.949411  576188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 10:31:50.949469  576188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 10:31:50.986405  576188 cri.go:89] found id: ""
	I0930 10:31:50.986521  576188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 10:31:50.995471  576188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 10:31:51.005070  576188 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0930 10:31:51.005164  576188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 10:31:51.014498  576188 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 10:31:51.014517  576188 kubeadm.go:157] found existing configuration files:
	
	I0930 10:31:51.014593  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 10:31:51.023579  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 10:31:51.023670  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 10:31:51.032109  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 10:31:51.040792  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 10:31:51.040883  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 10:31:51.049272  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 10:31:51.058271  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 10:31:51.058357  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 10:31:51.067199  576188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 10:31:51.075621  576188 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 10:31:51.075693  576188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 10:31:51.083850  576188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0930 10:31:51.127566  576188 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 10:31:51.127636  576188 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 10:31:51.147314  576188 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0930 10:31:51.147389  576188 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0930 10:31:51.147428  576188 kubeadm.go:310] OS: Linux
	I0930 10:31:51.147478  576188 kubeadm.go:310] CGROUPS_CPU: enabled
	I0930 10:31:51.147529  576188 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0930 10:31:51.147580  576188 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0930 10:31:51.147630  576188 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0930 10:31:51.147689  576188 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0930 10:31:51.147743  576188 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0930 10:31:51.147792  576188 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0930 10:31:51.147843  576188 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0930 10:31:51.147891  576188 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0930 10:31:51.211072  576188 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 10:31:51.211220  576188 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 10:31:51.211322  576188 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 10:31:51.217978  576188 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 10:31:51.222074  576188 out.go:235]   - Generating certificates and keys ...
	I0930 10:31:51.222200  576188 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 10:31:51.222290  576188 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 10:31:51.507541  576188 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 10:31:52.100429  576188 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 10:31:52.343512  576188 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 10:31:53.350821  576188 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 10:31:54.127332  576188 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 10:31:54.127730  576188 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-718366 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:31:55.090224  576188 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 10:31:55.090597  576188 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-718366 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:31:55.557333  576188 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 10:31:56.433561  576188 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 10:31:57.360076  576188 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 10:31:57.360372  576188 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 10:31:57.616865  576188 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 10:31:58.166068  576188 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 10:31:58.642711  576188 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 10:31:59.408755  576188 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 10:31:59.928063  576188 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 10:31:59.928676  576188 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 10:31:59.931546  576188 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 10:31:59.934534  576188 out.go:235]   - Booting up control plane ...
	I0930 10:31:59.934632  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 10:31:59.934707  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 10:31:59.934773  576188 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 10:31:59.943378  576188 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 10:31:59.949241  576188 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 10:31:59.949518  576188 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 10:32:00.105875  576188 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 10:32:00.106001  576188 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 10:32:01.107740  576188 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001980346s
	I0930 10:32:01.107838  576188 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 10:32:07.109888  576188 kubeadm.go:310] [api-check] The API server is healthy after 6.002182723s
	I0930 10:32:07.131339  576188 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 10:32:07.151401  576188 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 10:32:07.177130  576188 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 10:32:07.177349  576188 kubeadm.go:310] [mark-control-plane] Marking the node addons-718366 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 10:32:07.188510  576188 kubeadm.go:310] [bootstrap-token] Using token: 8aonc1.ekajo8hgoq6vth44
	I0930 10:32:07.193078  576188 out.go:235]   - Configuring RBAC rules ...
	I0930 10:32:07.193212  576188 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 10:32:07.195793  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 10:32:07.203953  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 10:32:07.207903  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 10:32:07.211613  576188 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 10:32:07.218369  576188 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 10:32:07.519705  576188 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 10:32:07.953415  576188 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 10:32:08.516178  576188 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 10:32:08.517416  576188 kubeadm.go:310] 
	I0930 10:32:08.517508  576188 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 10:32:08.517531  576188 kubeadm.go:310] 
	I0930 10:32:08.517630  576188 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 10:32:08.517641  576188 kubeadm.go:310] 
	I0930 10:32:08.517681  576188 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 10:32:08.517745  576188 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 10:32:08.517806  576188 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 10:32:08.517818  576188 kubeadm.go:310] 
	I0930 10:32:08.517880  576188 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 10:32:08.517888  576188 kubeadm.go:310] 
	I0930 10:32:08.517935  576188 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 10:32:08.517940  576188 kubeadm.go:310] 
	I0930 10:32:08.517992  576188 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 10:32:08.518066  576188 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 10:32:08.518134  576188 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 10:32:08.518138  576188 kubeadm.go:310] 
	I0930 10:32:08.518221  576188 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 10:32:08.518298  576188 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 10:32:08.518302  576188 kubeadm.go:310] 
	I0930 10:32:08.518385  576188 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8aonc1.ekajo8hgoq6vth44 \
	I0930 10:32:08.518487  576188 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:34f1ba6de874bd896834dc114ac775d877f5b795b01506ad8bb22dc9b74f70da \
	I0930 10:32:08.518508  576188 kubeadm.go:310] 	--control-plane 
	I0930 10:32:08.518513  576188 kubeadm.go:310] 
	I0930 10:32:08.518603  576188 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 10:32:08.518608  576188 kubeadm.go:310] 
	I0930 10:32:08.518690  576188 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8aonc1.ekajo8hgoq6vth44 \
	I0930 10:32:08.518791  576188 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:34f1ba6de874bd896834dc114ac775d877f5b795b01506ad8bb22dc9b74f70da 
	I0930 10:32:08.522706  576188 kubeadm.go:310] W0930 10:31:51.124221    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:32:08.523011  576188 kubeadm.go:310] W0930 10:31:51.125105    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:32:08.523230  576188 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0930 10:32:08.523336  576188 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 10:32:08.523356  576188 cni.go:84] Creating CNI manager for ""
	I0930 10:32:08.523365  576188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:32:08.526350  576188 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 10:32:08.528840  576188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 10:32:08.532638  576188 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 10:32:08.532658  576188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 10:32:08.550943  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 10:32:08.822890  576188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 10:32:08.823054  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:08.823069  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-718366 minikube.k8s.io/updated_at=2024_09_30T10_32_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-718366 minikube.k8s.io/primary=true
	I0930 10:32:08.983346  576188 ops.go:34] apiserver oom_adj: -16
	I0930 10:32:08.998983  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:09.500016  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:09.999359  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:10.499482  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:10.999362  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:11.499443  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:11.999113  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:12.500484  576188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:32:12.616697  576188 kubeadm.go:1113] duration metric: took 3.793709432s to wait for elevateKubeSystemPrivileges
	I0930 10:32:12.616732  576188 kubeadm.go:394] duration metric: took 21.667424713s to StartCluster
	I0930 10:32:12.616750  576188 settings.go:142] acquiring lock: {Name:mk11436cfb74a22d5df272d0ed716a2f4f11abe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:32:12.616873  576188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:32:12.617251  576188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/kubeconfig: {Name:mk2b4dce89b9a4c7357cab4707a99982ddc5b94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:32:12.617445  576188 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 10:32:12.617597  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 10:32:12.617836  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:32:12.617874  576188 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 10:32:12.617960  576188 addons.go:69] Setting yakd=true in profile "addons-718366"
	I0930 10:32:12.617979  576188 addons.go:234] Setting addon yakd=true in "addons-718366"
	I0930 10:32:12.618003  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.618496  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.618987  576188 addons.go:69] Setting inspektor-gadget=true in profile "addons-718366"
	I0930 10:32:12.619028  576188 addons.go:234] Setting addon inspektor-gadget=true in "addons-718366"
	I0930 10:32:12.619066  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.619563  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.619718  576188 addons.go:69] Setting metrics-server=true in profile "addons-718366"
	I0930 10:32:12.619732  576188 addons.go:234] Setting addon metrics-server=true in "addons-718366"
	I0930 10:32:12.619755  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.620173  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.620821  576188 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-718366"
	I0930 10:32:12.620870  576188 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-718366"
	I0930 10:32:12.620910  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.621401  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.627312  576188 addons.go:69] Setting registry=true in profile "addons-718366"
	I0930 10:32:12.627345  576188 addons.go:234] Setting addon registry=true in "addons-718366"
	I0930 10:32:12.627389  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.627879  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629127  576188 addons.go:69] Setting cloud-spanner=true in profile "addons-718366"
	I0930 10:32:12.629593  576188 addons.go:234] Setting addon cloud-spanner=true in "addons-718366"
	I0930 10:32:12.629630  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.630378  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629311  576188 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-718366"
	I0930 10:32:12.634602  576188 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-718366"
	I0930 10:32:12.634666  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.629324  576188 addons.go:69] Setting default-storageclass=true in profile "addons-718366"
	I0930 10:32:12.637049  576188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-718366"
	I0930 10:32:12.637348  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.642682  576188 addons.go:69] Setting storage-provisioner=true in profile "addons-718366"
	I0930 10:32:12.642716  576188 addons.go:234] Setting addon storage-provisioner=true in "addons-718366"
	I0930 10:32:12.642757  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.643213  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629329  576188 addons.go:69] Setting gcp-auth=true in profile "addons-718366"
	I0930 10:32:12.652125  576188 mustload.go:65] Loading cluster: addons-718366
	I0930 10:32:12.652324  576188 config.go:182] Loaded profile config "addons-718366": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:32:12.652576  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.656063  576188 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-718366"
	I0930 10:32:12.656091  576188 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-718366"
	I0930 10:32:12.656420  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.685014  576188 addons.go:69] Setting volcano=true in profile "addons-718366"
	I0930 10:32:12.685050  576188 addons.go:234] Setting addon volcano=true in "addons-718366"
	I0930 10:32:12.685092  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.685608  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629332  576188 addons.go:69] Setting ingress=true in profile "addons-718366"
	I0930 10:32:12.687633  576188 addons.go:234] Setting addon ingress=true in "addons-718366"
	I0930 10:32:12.687681  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.688210  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.705646  576188 addons.go:69] Setting volumesnapshots=true in profile "addons-718366"
	I0930 10:32:12.705685  576188 addons.go:234] Setting addon volumesnapshots=true in "addons-718366"
	I0930 10:32:12.705724  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.706207  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629336  576188 addons.go:69] Setting ingress-dns=true in profile "addons-718366"
	I0930 10:32:12.708613  576188 addons.go:234] Setting addon ingress-dns=true in "addons-718366"
	I0930 10:32:12.708663  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.709150  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.629418  576188 out.go:177] * Verifying Kubernetes components...
	I0930 10:32:12.729494  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.825496  576188 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 10:32:12.832477  576188 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 10:32:12.835208  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:32:12.835233  576188 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 10:32:12.835325  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.835432  576188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:32:12.853660  576188 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 10:32:12.855707  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.857751  576188 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 10:32:12.857864  576188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 10:32:12.859599  576188 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:32:12.872767  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 10:32:12.872887  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.865884  576188 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:32:12.875361  576188 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 10:32:12.875445  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.866737  576188 addons.go:234] Setting addon default-storageclass=true in "addons-718366"
	I0930 10:32:12.882918  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:12.883376  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:12.887937  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:32:12.887958  576188 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 10:32:12.888030  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.894765  576188 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:32:12.894794  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 10:32:12.894866  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.872690  576188 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 10:32:12.908423  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 10:32:12.908785  576188 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 10:32:12.908833  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 10:32:12.908950  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.940598  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 10:32:12.941002  576188 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 10:32:12.946206  576188 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 10:32:12.948814  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:12.949045  576188 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:32:12.949077  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 10:32:12.949171  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.958006  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:12.959188  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 10:32:12.961743  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0930 10:32:12.962757  576188 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0930 10:32:12.973838  576188 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:32:12.973872  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 10:32:12.973943  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.979512  576188 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 10:32:12.979700  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 10:32:12.985709  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 10:32:12.985933  576188 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:32:12.985946  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 10:32:12.986012  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:12.996257  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 10:32:12.996526  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.001310  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 10:32:13.001479  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:32:13.001508  576188 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 10:32:13.001634  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.009342  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 10:32:13.017322  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 10:32:13.020721  576188 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 10:32:13.021813  576188 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-718366"
	I0930 10:32:13.021852  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:13.022269  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:13.032608  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:32:13.032637  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 10:32:13.032715  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.058753  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.086640  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.090634  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.123015  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.154530  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.177875  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.178807  576188 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 10:32:13.178823  576188 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 10:32:13.178880  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.185183  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.204407  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.206891  576188 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 10:32:13.209370  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.213841  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.224068  576188 out.go:177]   - Using image docker.io/busybox:stable
	I0930 10:32:13.227725  576188 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:32:13.227749  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 10:32:13.227816  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:13.235510  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:13.260318  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	W0930 10:32:13.273338  576188 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0930 10:32:13.273406  576188 retry.go:31] will retry after 227.69102ms: ssh: handshake failed: EOF
	I0930 10:32:13.394925  576188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:32:13.486745  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:32:13.486818  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 10:32:13.623628  576188 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:32:13.623711  576188 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 10:32:13.630043  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:32:13.635130  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:32:13.638091  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:32:13.638162  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 10:32:13.659361  576188 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:32:13.659438  576188 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 10:32:13.671231  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:32:13.673254  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:32:13.673314  576188 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 10:32:13.699306  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:32:13.699326  576188 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 10:32:13.702344  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:32:13.749760  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:32:13.749837  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 10:32:13.762014  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 10:32:13.776095  576188 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 10:32:13.776167  576188 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 10:32:13.783348  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:32:13.795504  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:32:13.795584  576188 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 10:32:13.809799  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:32:13.809876  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 10:32:13.867266  576188 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:32:13.867337  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 10:32:13.895970  576188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:32:13.896050  576188 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 10:32:13.927958  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:32:13.928037  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 10:32:13.932218  576188 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:32:13.932292  576188 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 10:32:13.950651  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:32:13.969239  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:32:13.969315  576188 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 10:32:13.972998  576188 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:32:13.973069  576188 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 10:32:14.064724  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:32:14.068228  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:32:14.068306  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 10:32:14.084109  576188 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:32:14.084189  576188 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 10:32:14.101672  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:32:14.118680  576188 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:32:14.118751  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 10:32:14.128305  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:32:14.128380  576188 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 10:32:14.228099  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:32:14.228175  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 10:32:14.260067  576188 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:32:14.260147  576188 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 10:32:14.267263  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:32:14.286038  576188 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:14.286113  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 10:32:14.406085  576188 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:32:14.406166  576188 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 10:32:14.409527  576188 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:32:14.409623  576188 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 10:32:14.443415  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:14.478742  576188 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:32:14.478821  576188 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 10:32:14.482790  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:32:14.482880  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 10:32:14.522876  576188 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:32:14.522950  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 10:32:14.538265  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:32:14.538348  576188 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 10:32:14.599317  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:32:14.621923  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:32:14.621995  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 10:32:14.718338  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:32:14.718419  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 10:32:14.771911  576188 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:32:14.771992  576188 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 10:32:14.830453  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:32:16.302802  576188 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.293410506s)
	I0930 10:32:16.302886  576188 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0930 10:32:16.303052  576188 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.908050917s)
	I0930 10:32:16.303221  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.673105181s)
	I0930 10:32:16.304869  576188 node_ready.go:35] waiting up to 6m0s for node "addons-718366" to be "Ready" ...
	I0930 10:32:16.969956  576188 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-718366" context rescaled to 1 replicas
	I0930 10:32:17.813534  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.178328626s)
	I0930 10:32:17.813663  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.142359566s)
	I0930 10:32:18.331726  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:18.989036  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.2269377s)
	I0930 10:32:18.989135  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.205718923s)
	I0930 10:32:18.989300  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.038581281s)
	I0930 10:32:18.989533  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.924735717s)
	I0930 10:32:18.990072  576188 addons.go:475] Verifying addon registry=true in "addons-718366"
	I0930 10:32:18.989162  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.2865793s)
	I0930 10:32:18.989730  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.887983209s)
	I0930 10:32:18.990430  576188 addons.go:475] Verifying addon metrics-server=true in "addons-718366"
	I0930 10:32:18.989761  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.722429624s)
	I0930 10:32:18.990693  576188 addons.go:475] Verifying addon ingress=true in "addons-718366"
	I0930 10:32:18.989832  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.546341657s)
	W0930 10:32:18.991429  576188 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:32:18.991452  576188 retry.go:31] will retry after 214.891484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:32:18.989886  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.390493939s)
	I0930 10:32:18.993976  576188 out.go:177] * Verifying ingress addon...
	I0930 10:32:18.993993  576188 out.go:177] * Verifying registry addon...
	I0930 10:32:18.994136  576188 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-718366 service yakd-dashboard -n yakd-dashboard
	
	I0930 10:32:18.998130  576188 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 10:32:19.000026  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 10:32:19.012749  576188 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:32:19.012827  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0930 10:32:19.013748  576188 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0930 10:32:19.015873  576188 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 10:32:19.015899  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:19.206505  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:32:19.222406  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.391854023s)
	I0930 10:32:19.222443  576188 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-718366"
	I0930 10:32:19.225269  576188 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 10:32:19.228851  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 10:32:19.265510  576188 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:32:19.265536  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:19.502396  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:19.510520  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:19.733138  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.002773  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:20.004965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:20.233838  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.503847  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:20.505878  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:20.735188  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:20.808517  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:21.005465  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:21.006508  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:21.232962  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:21.508544  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:21.510168  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:21.746490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:21.919471  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 10:32:21.919583  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:21.945306  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:22.005204  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:22.020654  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:22.107096  576188 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 10:32:22.156422  576188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.949861964s)
	I0930 10:32:22.161917  576188 addons.go:234] Setting addon gcp-auth=true in "addons-718366"
	I0930 10:32:22.161972  576188 host.go:66] Checking if "addons-718366" exists ...
	I0930 10:32:22.162436  576188 cli_runner.go:164] Run: docker container inspect addons-718366 --format={{.State.Status}}
	I0930 10:32:22.180503  576188 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 10:32:22.180562  576188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718366
	I0930 10:32:22.199581  576188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38988 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/addons-718366/id_rsa Username:docker}
	I0930 10:32:22.234471  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:22.293532  576188 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:32:22.295855  576188 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 10:32:22.298481  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:32:22.298507  576188 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 10:32:22.327120  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:32:22.327146  576188 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 10:32:22.354965  576188 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:32:22.354989  576188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 10:32:22.374415  576188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:32:22.505237  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:22.505593  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:22.733404  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:22.810784  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:22.982766  576188 addons.go:475] Verifying addon gcp-auth=true in "addons-718366"
	I0930 10:32:22.985946  576188 out.go:177] * Verifying gcp-auth addon...
	I0930 10:32:22.989503  576188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 10:32:22.997921  576188 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:32:22.997948  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:23.007118  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:23.013282  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:23.232430  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:23.492864  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:23.502671  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:23.504311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:23.732922  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:23.993049  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:24.002595  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:24.005381  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:24.232995  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:24.492978  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:24.502914  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:24.503966  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:24.733190  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:24.993358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:25.002805  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:25.003600  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:25.232476  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:25.308811  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:25.492564  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:25.502308  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:25.504363  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:25.732965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:25.993474  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:26.003592  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:26.005468  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:26.232578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:26.493164  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:26.502818  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:26.504372  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:26.732670  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:26.993385  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:27.004214  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:27.004360  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:27.232999  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:27.493904  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:27.502518  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:27.504500  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:27.732700  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:27.809256  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:27.993469  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:28.002259  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:28.005142  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:28.232398  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:28.493035  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:28.502278  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:28.503849  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:28.732758  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:28.992992  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:29.003509  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:29.004188  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:29.232281  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:29.492609  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:29.501741  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:29.504027  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:29.732607  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:29.993719  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:30.005781  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:30.006305  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:30.232478  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:30.308805  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:30.493327  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:30.502458  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:30.504010  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:30.732161  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:30.993225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:31.002921  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:31.004619  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:31.232186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:31.492616  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:31.501951  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:31.503335  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:31.732881  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:31.993602  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:32.003681  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:32.004106  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:32.232590  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:32.308898  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:32.492382  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:32.502524  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:32.503242  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:32.732493  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:32.993359  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:33.003345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:33.004523  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:33.232210  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:33.492895  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:33.502809  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:33.503380  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:33.732345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:33.992694  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:34.002487  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:34.005419  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:34.232668  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:34.493362  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:34.502120  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:34.503290  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:34.732832  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:34.808872  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:34.993165  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:35.002532  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:35.003792  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:35.232243  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:35.492644  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:35.502151  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:35.504388  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:35.732397  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:35.993350  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:36.004449  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:36.006027  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:36.233129  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:36.493897  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:36.503054  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:36.503156  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:36.732619  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:36.993186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:37.003617  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:37.004328  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:37.232382  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:37.309099  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:37.492995  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:37.502362  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:37.503981  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:37.732628  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:37.992500  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:38.006378  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:38.009415  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:38.232948  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:38.493574  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:38.501907  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:38.503340  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:38.732877  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:38.993074  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:39.002160  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:39.004134  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:39.232913  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:39.492334  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:39.502072  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:39.504609  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:39.733100  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:39.808384  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:39.992997  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:40.002119  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:40.012472  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:40.232629  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:40.492673  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:40.501888  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:40.503434  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:40.732929  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:40.992943  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:41.003060  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:41.004287  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:41.232552  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:41.493144  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:41.501724  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:41.504150  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:41.732666  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:41.808700  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:41.992905  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:42.002375  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:42.004751  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:42.232856  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:42.494375  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:42.502604  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:42.503446  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:42.732867  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:42.993326  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:43.002100  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:43.004307  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:43.232852  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:43.493140  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:43.501743  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:43.503151  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:43.733043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:43.993474  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:44.003911  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:44.004199  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:44.232444  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:44.308664  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:44.492846  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:44.502736  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:44.503109  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:44.732682  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:44.992688  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:45.002473  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:45.006372  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:45.233808  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:45.493054  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:45.502649  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:45.504224  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:45.732634  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:45.992992  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:46.003067  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:46.005020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:46.232318  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:46.308743  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:46.493311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:46.501833  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:46.504311  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:46.732337  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:46.993446  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:47.002979  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:47.004213  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:47.231826  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:47.493043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:47.502555  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:47.504579  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:47.733091  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:47.992702  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:48.006318  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:48.006591  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:48.232843  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:48.309156  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:48.492630  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:48.502793  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:48.505041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:48.732633  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:48.993020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:49.002803  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:49.005073  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:49.232599  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:49.493358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:49.502132  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:49.504685  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:49.732732  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:49.993101  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:50.008747  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:50.011007  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:50.232033  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:50.492811  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:50.502024  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:50.503194  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:50.732565  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:50.808123  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:50.992880  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:51.002470  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:51.004489  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:51.232566  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:51.493283  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:51.503096  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:51.504579  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:51.732498  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:51.997038  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:52.003743  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:52.004664  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:52.232961  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:52.493233  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:52.502560  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:52.504146  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:52.732196  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:52.809117  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:52.993352  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:53.002467  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:53.005258  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:53.232118  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:53.492883  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:53.503298  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:53.503937  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:53.732561  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:53.992888  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:54.003179  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:54.003621  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:54.232000  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:54.493201  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:54.502407  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:54.504047  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:54.732754  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:54.809165  576188 node_ready.go:53] node "addons-718366" has status "Ready":"False"
	I0930 10:32:54.993439  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:55.003745  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:55.006573  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:55.232921  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:55.532690  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:55.536564  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:55.537614  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:55.747717  576188 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:32:55.747798  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:55.813584  576188 node_ready.go:49] node "addons-718366" has status "Ready":"True"
	I0930 10:32:55.813696  576188 node_ready.go:38] duration metric: took 39.508639259s for node "addons-718366" to be "Ready" ...
	I0930 10:32:55.813729  576188 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:32:55.842207  576188 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:56.024341  576188 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:32:56.024415  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:56.026608  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:56.027649  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:56.238249  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:56.510908  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:56.599871  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:56.601369  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:56.734813  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:56.993968  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:57.004113  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:57.004475  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:57.234269  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:57.349188  576188 pod_ready.go:93] pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.349213  576188 pod_ready.go:82] duration metric: took 1.506927684s for pod "coredns-7c65d6cfc9-dtmzl" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.349264  576188 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.354868  576188 pod_ready.go:93] pod "etcd-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.354894  576188 pod_ready.go:82] duration metric: took 5.614429ms for pod "etcd-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.354911  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.360427  576188 pod_ready.go:93] pod "kube-apiserver-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.360453  576188 pod_ready.go:82] duration metric: took 5.533545ms for pod "kube-apiserver-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.360465  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.366443  576188 pod_ready.go:93] pod "kube-controller-manager-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.366468  576188 pod_ready.go:82] duration metric: took 5.995876ms for pod "kube-controller-manager-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.366481  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6d7ts" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.409203  576188 pod_ready.go:93] pod "kube-proxy-6d7ts" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.409232  576188 pod_ready.go:82] duration metric: took 42.742719ms for pod "kube-proxy-6d7ts" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.409245  576188 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.494502  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:57.504034  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:57.504588  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:57.741490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:57.809369  576188 pod_ready.go:93] pod "kube-scheduler-addons-718366" in "kube-system" namespace has status "Ready":"True"
	I0930 10:32:57.809395  576188 pod_ready.go:82] duration metric: took 400.142122ms for pod "kube-scheduler-addons-718366" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.809406  576188 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace to be "Ready" ...
	I0930 10:32:57.992791  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:58.002813  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:58.005194  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:58.235034  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:58.493263  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:58.505193  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:58.507236  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:58.735275  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:58.993601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:59.003135  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:59.005872  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:59.234232  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:59.493712  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:32:59.505146  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:32:59.506583  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:32:59.734233  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:32:59.817196  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:32:59.996524  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:00.018042  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:00.019456  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:00.235319  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:00.493875  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:00.513018  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:00.515874  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:00.735209  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:00.993692  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:01.009352  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:01.011139  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:01.234558  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:01.493345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:01.502755  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:01.504885  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:01.734041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:01.823332  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:01.993286  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:02.003595  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:02.005208  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:02.234246  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:02.494833  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:02.506503  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:02.507965  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:02.733979  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:02.994512  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:03.006008  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:03.008882  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:03.235987  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:03.502069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:03.504611  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:03.508145  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:03.734477  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:03.993075  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:04.002465  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:04.005969  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:04.237150  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:04.318563  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:04.493450  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:04.503535  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:04.505295  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:04.735410  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:04.993251  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:05.004507  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:05.005793  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:05.233147  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:05.493785  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:05.503110  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:05.504756  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:05.734929  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:05.993818  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:06.005361  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:06.008120  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:06.234165  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:06.494029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:06.506345  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:06.507733  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:06.736131  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:06.820180  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:06.997221  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:07.003917  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:07.012186  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:07.235277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:07.494419  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:07.503987  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:07.506651  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:07.735614  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:07.993601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:08.007216  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:08.008949  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:08.235108  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:08.492758  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:08.506875  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:08.509276  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:08.734821  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:08.996343  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:09.003494  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:09.018021  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:09.233920  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:09.322744  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:09.495622  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:09.503188  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:09.505370  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:09.733302  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:09.993442  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:10.007158  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:10.014910  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:10.236566  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:10.493122  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:10.506277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:10.508170  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:10.734819  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.003392  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:11.017958  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:11.024396  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:11.241113  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.493717  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:11.503395  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:11.505398  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:11.734258  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:11.818701  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:11.993638  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:12.004028  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:12.005119  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:12.234546  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:12.493816  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:12.502382  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:12.504357  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:12.735120  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:12.993827  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:13.003086  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:13.005511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:13.240764  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:13.493012  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:13.502733  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:13.504695  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:13.739103  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:13.992794  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:14.002410  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:14.004962  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:14.234182  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:14.315747  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:14.493894  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:14.502951  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:14.504325  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:14.735374  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:14.995201  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:15.008392  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:15.009511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:15.239287  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:15.497798  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:15.505845  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:15.506265  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:15.733914  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:15.994121  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:16.002064  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:16.005323  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:16.235840  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:16.317348  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:16.493717  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:16.502559  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:16.504743  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:16.733456  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:16.993232  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:17.004117  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:17.005715  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:17.233977  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:17.493225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:17.508853  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:17.509324  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:17.733379  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:17.993128  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:18.002969  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:18.004753  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:18.235053  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:18.318055  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:18.494182  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:18.514063  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:18.515256  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:18.741787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:18.993437  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:19.006106  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:19.006941  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:19.238835  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:19.493578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:19.503346  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:19.507520  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:19.735461  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:19.993675  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:20.007386  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:20.009120  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:20.234329  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:20.494059  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:20.503676  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:20.508870  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:20.734675  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:20.819054  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:20.994644  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:21.005532  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:21.006881  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:21.233747  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:21.493683  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:21.502510  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:21.505435  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:21.733595  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:21.993151  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:22.004128  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:22.007124  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:22.234355  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:22.494138  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:22.522806  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:22.523017  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:22.733192  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:22.993544  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:23.003301  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:23.005614  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:23.234009  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:23.316009  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:23.493223  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:23.502465  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:23.504091  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:23.734075  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:23.993191  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:24.005564  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:24.006266  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:24.237087  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:24.494192  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:24.509932  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:24.511585  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:24.736086  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:24.993584  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:25.002534  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:25.004467  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:25.238048  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:25.316968  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:25.493170  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:25.502257  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:25.504345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:25.735840  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:25.993750  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:26.014041  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:26.018512  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:26.234506  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:26.499206  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:26.522015  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:26.531645  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:26.734077  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:26.995142  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:27.002623  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:27.005131  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:27.234277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:27.509630  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:27.517834  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:27.519610  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:27.734102  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:27.815917  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:27.993225  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:28.004799  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:28.007787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:28.233431  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:28.495964  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:28.505908  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:28.507029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:28.743601  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:28.994222  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:29.005072  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:29.005919  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:29.234475  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:29.493121  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:29.503087  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:29.505224  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:29.733867  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:29.818825  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:29.993832  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:30.003223  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:30.009270  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:30.234345  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:30.493573  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:30.503172  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:30.506658  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:30.734108  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:30.997885  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:31.003703  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:31.006228  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:31.234690  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:31.492946  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:31.504905  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:31.505338  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:31.734023  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:31.993444  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:32.005887  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:32.016752  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:32.234205  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:32.316627  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:32.493299  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:32.504102  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:32.512754  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:32.735003  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:32.994944  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:33.006628  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:33.007729  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:33.234441  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:33.493806  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:33.505141  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:33.507304  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:33.738773  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:33.993624  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:34.013205  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:34.017042  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:34.233853  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:34.316867  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:34.492641  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:34.502886  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:34.503705  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:34.734286  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:34.993856  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:35.002176  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:35.004584  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:35.233492  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:35.493057  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:35.502018  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:35.503973  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:35.734314  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:35.993679  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:36.002264  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:36.008072  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:36.233857  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:36.492965  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:36.502535  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:36.504461  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:36.735017  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:36.816831  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:36.996016  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:37.008288  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:37.015405  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:37.234294  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:37.497062  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:37.504363  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:37.504553  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:37.735672  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:37.992884  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:38.005378  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:38.007796  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:38.237325  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:38.493907  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:38.505124  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:38.505765  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:33:38.734820  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:38.818257  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:38.994462  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:39.004598  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:39.014134  576188 kapi.go:107] duration metric: took 1m20.014106342s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 10:33:39.235130  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:39.494071  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:39.503484  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:39.734794  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:39.999698  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:40.010425  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:40.242604  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:40.499596  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:40.503174  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:40.735274  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:40.993423  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:41.003329  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:41.236791  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:41.316472  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:41.494610  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:41.503610  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:41.734043  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:41.994292  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:42.002568  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:42.235021  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:42.493143  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:42.502820  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:42.733736  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:42.993069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:43.003100  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:43.234277  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:43.317480  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:43.493236  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:43.502436  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:43.734921  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:43.992811  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:44.003086  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:44.233865  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:44.493110  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:44.502615  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:44.733541  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:44.993633  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:45.003852  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:45.234843  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:45.493514  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:45.502782  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:45.733458  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:45.817273  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:45.993706  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:46.016026  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:46.233913  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:46.498463  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:46.502757  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:46.734490  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:46.993029  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:47.004462  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:47.235637  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:47.503521  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:47.504652  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:47.741358  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:47.993918  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:48.006378  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:48.234693  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:48.315817  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:48.493248  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:33:48.502422  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:48.740592  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:48.993401  576188 kapi.go:107] duration metric: took 1m26.003896883s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 10:33:48.996461  576188 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-718366 cluster.
	I0930 10:33:48.999075  576188 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 10:33:49.002456  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:49.005169  576188 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 10:33:49.235396  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:49.503984  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:49.734511  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:50.004782  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:50.235070  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:50.323313  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:50.503830  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:50.734604  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:51.003831  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:51.234289  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:51.503943  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:51.733769  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.002609  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:52.234340  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.507200  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:52.734763  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:52.818591  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:53.004428  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:53.235787  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:53.502862  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:53.734437  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:54.007069  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:54.235077  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:54.503292  576188 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:33:54.735359  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:55.002193  576188 kapi.go:107] duration metric: took 1m36.004059929s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 10:33:55.234033  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:55.317516  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:55.734069  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:56.234143  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:56.734127  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.233654  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.738983  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:57.816482  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:33:58.234471  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:58.734677  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.238020  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.734710  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:33:59.817182  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:00.236578  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:00.734525  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.234627  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.734546  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:01.825323  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:02.233540  576188 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:34:02.738772  576188 kapi.go:107] duration metric: took 1m43.50991885s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 10:34:02.744119  576188 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0930 10:34:02.746979  576188 addons.go:510] duration metric: took 1m50.129091289s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0930 10:34:04.316300  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:06.815648  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:09.315052  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:11.315816  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:13.316065  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:15.316190  576188 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"False"
	I0930 10:34:16.315831  576188 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace has status "Ready":"True"
	I0930 10:34:16.315861  576188 pod_ready.go:82] duration metric: took 1m18.506446968s for pod "metrics-server-84c5f94fbc-jqf86" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.315874  576188 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.321502  576188 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace has status "Ready":"True"
	I0930 10:34:16.321532  576188 pod_ready.go:82] duration metric: took 5.649022ms for pod "nvidia-device-plugin-daemonset-4vhfz" in "kube-system" namespace to be "Ready" ...
	I0930 10:34:16.321583  576188 pod_ready.go:39] duration metric: took 1m20.507828006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:34:16.321605  576188 api_server.go:52] waiting for apiserver process to appear ...
	I0930 10:34:16.321638  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:16.321706  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:16.386809  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:16.386886  576188 cri.go:89] found id: ""
	I0930 10:34:16.386900  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:16.386984  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.391025  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:16.391106  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:16.435062  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:16.435085  576188 cri.go:89] found id: ""
	I0930 10:34:16.435094  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:16.435153  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.438701  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:16.438773  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:16.478714  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:16.478737  576188 cri.go:89] found id: ""
	I0930 10:34:16.478746  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:16.478802  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.482397  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:16.482471  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:16.537909  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:16.537932  576188 cri.go:89] found id: ""
	I0930 10:34:16.537940  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:16.538010  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.541631  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:16.541707  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:16.584294  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:16.584316  576188 cri.go:89] found id: ""
	I0930 10:34:16.584324  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:16.584387  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.588121  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:16.588197  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:16.627920  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:16.627943  576188 cri.go:89] found id: ""
	I0930 10:34:16.627951  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:16.628010  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.631831  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:16.631910  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:16.670917  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:16.670987  576188 cri.go:89] found id: ""
	I0930 10:34:16.671002  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:16.671067  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:16.674818  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:16.674843  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:16.691258  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:16.691286  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:16.781066  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:16.781106  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:16.824438  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:16.824473  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:16.883060  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:16.883091  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:16.989887  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:16.989925  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:17.064721  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.064968  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.065190  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.065432  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.065664  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.065898  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.067781  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.067995  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:17.104140  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:17.104180  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:17.291559  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:17.291591  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:17.344411  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:17.344446  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:17.394328  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:17.394358  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:17.437492  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:17.437522  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:17.506642  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:17.506679  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:17.557358  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:17.557386  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:17.557577  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:17.557600  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.557623  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.557643  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:17.557652  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:17.557663  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:17.557670  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:17.557678  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:27.559396  576188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:34:27.573481  576188 api_server.go:72] duration metric: took 2m14.955998532s to wait for apiserver process to appear ...
	I0930 10:34:27.573512  576188 api_server.go:88] waiting for apiserver healthz status ...
	I0930 10:34:27.573570  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:27.573627  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:27.612157  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:27.612193  576188 cri.go:89] found id: ""
	I0930 10:34:27.612201  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:27.612290  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.615922  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:27.615995  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:27.657373  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:27.657395  576188 cri.go:89] found id: ""
	I0930 10:34:27.657413  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:27.657473  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.661114  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:27.661186  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:27.699276  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:27.699300  576188 cri.go:89] found id: ""
	I0930 10:34:27.699309  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:27.699385  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.703275  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:27.703356  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:27.743333  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:27.743353  576188 cri.go:89] found id: ""
	I0930 10:34:27.743361  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:27.743432  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.746997  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:27.747079  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:27.787583  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:27.787605  576188 cri.go:89] found id: ""
	I0930 10:34:27.787613  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:27.787691  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.791098  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:27.791173  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:27.850541  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:27.850563  576188 cri.go:89] found id: ""
	I0930 10:34:27.850575  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:27.850631  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.854249  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:27.854319  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:27.893234  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:27.893260  576188 cri.go:89] found id: ""
	I0930 10:34:27.893268  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:27.893322  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:27.897133  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:27.897160  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:27.951284  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:27.951319  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:28.003152  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:28.003184  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:28.043478  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:28.043557  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:28.115108  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:28.115147  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:28.159435  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:28.159461  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:28.258636  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:28.258677  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:28.302989  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:28.303015  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:28.370971  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.371245  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.371445  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.371681  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.371871  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.372100  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.373981  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.374197  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:28.410471  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:28.410499  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:28.427272  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:28.427345  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:28.564680  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:28.564708  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:28.622261  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:28.622295  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:28.714780  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:28.714813  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:28.714867  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:28.714881  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.714889  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.714916  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:28.714924  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:28.714934  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:28.714940  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:28.714947  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:38.716957  576188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0930 10:34:38.725719  576188 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0930 10:34:38.726755  576188 api_server.go:141] control plane version: v1.31.1
	I0930 10:34:38.726784  576188 api_server.go:131] duration metric: took 11.153263628s to wait for apiserver health ...
	I0930 10:34:38.726809  576188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 10:34:38.726837  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:34:38.726904  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:34:38.773675  576188 cri.go:89] found id: "162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:38.773695  576188 cri.go:89] found id: ""
	I0930 10:34:38.773703  576188 logs.go:276] 1 containers: [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b]
	I0930 10:34:38.773769  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.777305  576188 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:34:38.777389  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:34:38.819225  576188 cri.go:89] found id: "c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:38.819245  576188 cri.go:89] found id: ""
	I0930 10:34:38.819254  576188 logs.go:276] 1 containers: [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70]
	I0930 10:34:38.819313  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.823902  576188 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:34:38.823980  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:34:38.865257  576188 cri.go:89] found id: "8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:38.865278  576188 cri.go:89] found id: ""
	I0930 10:34:38.865301  576188 logs.go:276] 1 containers: [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e]
	I0930 10:34:38.865358  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.869041  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:34:38.869123  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:34:38.909299  576188 cri.go:89] found id: "f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:38.909323  576188 cri.go:89] found id: ""
	I0930 10:34:38.909331  576188 logs.go:276] 1 containers: [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf]
	I0930 10:34:38.909388  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.912958  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:34:38.913039  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:34:38.951466  576188 cri.go:89] found id: "d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:38.951489  576188 cri.go:89] found id: ""
	I0930 10:34:38.951497  576188 logs.go:276] 1 containers: [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6]
	I0930 10:34:38.951555  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:38.955148  576188 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:34:38.955250  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:34:38.999433  576188 cri.go:89] found id: "8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:38.999506  576188 cri.go:89] found id: ""
	I0930 10:34:38.999523  576188 logs.go:276] 1 containers: [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce]
	I0930 10:34:38.999588  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:39.003640  576188 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:34:39.003758  576188 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:34:39.042975  576188 cri.go:89] found id: "97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:39.043045  576188 cri.go:89] found id: ""
	I0930 10:34:39.043060  576188 logs.go:276] 1 containers: [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c]
	I0930 10:34:39.043118  576188 ssh_runner.go:195] Run: which crictl
	I0930 10:34:39.046722  576188 logs.go:123] Gathering logs for kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] ...
	I0930 10:34:39.046747  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce"
	I0930 10:34:39.115864  576188 logs.go:123] Gathering logs for kubelet ...
	I0930 10:34:39.115902  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:34:39.186356  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541514    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.186605  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541583    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.186799  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.541651    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.187028  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.187213  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.187443  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.189229  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.189452  576188 logs.go:138] Found kubelet problem: Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:39.226877  576188 logs.go:123] Gathering logs for dmesg ...
	I0930 10:34:39.226918  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:34:39.244214  576188 logs.go:123] Gathering logs for etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] ...
	I0930 10:34:39.244244  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70"
	I0930 10:34:39.303635  576188 logs.go:123] Gathering logs for kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] ...
	I0930 10:34:39.303672  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf"
	I0930 10:34:39.346611  576188 logs.go:123] Gathering logs for kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] ...
	I0930 10:34:39.346643  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6"
	I0930 10:34:39.385388  576188 logs.go:123] Gathering logs for kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] ...
	I0930 10:34:39.385425  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c"
	I0930 10:34:39.449017  576188 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:34:39.449056  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0930 10:34:39.546447  576188 logs.go:123] Gathering logs for container status ...
	I0930 10:34:39.546489  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:34:39.596349  576188 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:34:39.596379  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:34:39.729086  576188 logs.go:123] Gathering logs for kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] ...
	I0930 10:34:39.729117  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b"
	I0930 10:34:39.806140  576188 logs.go:123] Gathering logs for coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] ...
	I0930 10:34:39.806172  576188 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e"
	I0930 10:34:39.854236  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:39.854262  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0930 10:34:39.854352  576188 out.go:270] X Problems detected in kubelet:
	W0930 10:34:39.854377  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.541667    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.854389  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.578908    1518 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-718366" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.854401  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.578958    1518 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	W0930 10:34:39.854407  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: W0930 10:32:55.619529    1518 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-718366" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-718366' and this object
	W0930 10:34:39.854414  576188 out.go:270]   Sep 30 10:32:55 addons-718366 kubelet[1518]: E0930 10:32:55.619583    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-718366\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-718366' and this object" logger="UnhandledError"
	I0930 10:34:39.854426  576188 out.go:358] Setting ErrFile to fd 2...
	I0930 10:34:39.854432  576188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:34:49.868953  576188 system_pods.go:59] 18 kube-system pods found
	I0930 10:34:49.868992  576188 system_pods.go:61] "coredns-7c65d6cfc9-dtmzl" [7a2c2f43-a853-49df-b58f-e6f6141a2737] Running
	I0930 10:34:49.869001  576188 system_pods.go:61] "csi-hostpath-attacher-0" [66af66d7-7f8b-4650-a4b6-9b5162ab76a1] Running
	I0930 10:34:49.869006  576188 system_pods.go:61] "csi-hostpath-resizer-0" [2d3f5d2f-f058-4a6b-b1c3-f25d3d549257] Running
	I0930 10:34:49.869012  576188 system_pods.go:61] "csi-hostpathplugin-mzdc5" [1f8386ff-e365-4d77-85c4-4380cc952f88] Running
	I0930 10:34:49.869048  576188 system_pods.go:61] "etcd-addons-718366" [41eb7870-f127-4cfa-8bb3-b32081bec033] Running
	I0930 10:34:49.869053  576188 system_pods.go:61] "kindnet-cx2x5" [cc2b53ef-4eba-4f69-a5e3-d3b1b8aee067] Running
	I0930 10:34:49.869062  576188 system_pods.go:61] "kube-apiserver-addons-718366" [d591a564-dc70-47d3-9e30-ac55eb92f702] Running
	I0930 10:34:49.869066  576188 system_pods.go:61] "kube-controller-manager-addons-718366" [566dbcee-1187-41f2-aaf4-b462be8fedc8] Running
	I0930 10:34:49.869079  576188 system_pods.go:61] "kube-ingress-dns-minikube" [201cdd5a-777d-406a-a3c3-ae55dfa26b03] Running
	I0930 10:34:49.869083  576188 system_pods.go:61] "kube-proxy-6d7ts" [1c00ed0e-dc57-4a81-b778-b92a64f0e0c1] Running
	I0930 10:34:49.869087  576188 system_pods.go:61] "kube-scheduler-addons-718366" [2159256d-1219-4d6d-9ec4-10a229c89118] Running
	I0930 10:34:49.869092  576188 system_pods.go:61] "metrics-server-84c5f94fbc-jqf86" [37c7c588-691f-43b1-bc7e-d9d29b8c740e] Running
	I0930 10:34:49.869130  576188 system_pods.go:61] "nvidia-device-plugin-daemonset-4vhfz" [409875b6-caeb-49b0-a6a3-4adab5c26abf] Running
	I0930 10:34:49.869143  576188 system_pods.go:61] "registry-66c9cd494c-zx9j9" [a2779ea5-90ce-41c6-800a-4fd0e62455e1] Running
	I0930 10:34:49.869147  576188 system_pods.go:61] "registry-proxy-nxhd5" [78962db4-c230-431b-b141-405fd6389146] Running
	I0930 10:34:49.869151  576188 system_pods.go:61] "snapshot-controller-56fcc65765-fnzp5" [00072f66-80b8-45a4-b940-6db1fba0c14b] Running
	I0930 10:34:49.869156  576188 system_pods.go:61] "snapshot-controller-56fcc65765-rtd66" [e61b1df1-f9b8-4ed6-b8bb-30c16e9e1a30] Running
	I0930 10:34:49.869160  576188 system_pods.go:61] "storage-provisioner" [fcd0fbac-220e-4dd5-a1a6-3ecae26b1962] Running
	I0930 10:34:49.869169  576188 system_pods.go:74] duration metric: took 11.14235034s to wait for pod list to return data ...
	I0930 10:34:49.869180  576188 default_sa.go:34] waiting for default service account to be created ...
	I0930 10:34:49.872043  576188 default_sa.go:45] found service account: "default"
	I0930 10:34:49.872072  576188 default_sa.go:55] duration metric: took 2.885942ms for default service account to be created ...
	I0930 10:34:49.872082  576188 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 10:34:49.882723  576188 system_pods.go:86] 18 kube-system pods found
	I0930 10:34:49.882762  576188 system_pods.go:89] "coredns-7c65d6cfc9-dtmzl" [7a2c2f43-a853-49df-b58f-e6f6141a2737] Running
	I0930 10:34:49.882770  576188 system_pods.go:89] "csi-hostpath-attacher-0" [66af66d7-7f8b-4650-a4b6-9b5162ab76a1] Running
	I0930 10:34:49.882794  576188 system_pods.go:89] "csi-hostpath-resizer-0" [2d3f5d2f-f058-4a6b-b1c3-f25d3d549257] Running
	I0930 10:34:49.882803  576188 system_pods.go:89] "csi-hostpathplugin-mzdc5" [1f8386ff-e365-4d77-85c4-4380cc952f88] Running
	I0930 10:34:49.882815  576188 system_pods.go:89] "etcd-addons-718366" [41eb7870-f127-4cfa-8bb3-b32081bec033] Running
	I0930 10:34:49.882820  576188 system_pods.go:89] "kindnet-cx2x5" [cc2b53ef-4eba-4f69-a5e3-d3b1b8aee067] Running
	I0930 10:34:49.882825  576188 system_pods.go:89] "kube-apiserver-addons-718366" [d591a564-dc70-47d3-9e30-ac55eb92f702] Running
	I0930 10:34:49.882835  576188 system_pods.go:89] "kube-controller-manager-addons-718366" [566dbcee-1187-41f2-aaf4-b462be8fedc8] Running
	I0930 10:34:49.882840  576188 system_pods.go:89] "kube-ingress-dns-minikube" [201cdd5a-777d-406a-a3c3-ae55dfa26b03] Running
	I0930 10:34:49.882845  576188 system_pods.go:89] "kube-proxy-6d7ts" [1c00ed0e-dc57-4a81-b778-b92a64f0e0c1] Running
	I0930 10:34:49.882855  576188 system_pods.go:89] "kube-scheduler-addons-718366" [2159256d-1219-4d6d-9ec4-10a229c89118] Running
	I0930 10:34:49.882859  576188 system_pods.go:89] "metrics-server-84c5f94fbc-jqf86" [37c7c588-691f-43b1-bc7e-d9d29b8c740e] Running
	I0930 10:34:49.882882  576188 system_pods.go:89] "nvidia-device-plugin-daemonset-4vhfz" [409875b6-caeb-49b0-a6a3-4adab5c26abf] Running
	I0930 10:34:49.882887  576188 system_pods.go:89] "registry-66c9cd494c-zx9j9" [a2779ea5-90ce-41c6-800a-4fd0e62455e1] Running
	I0930 10:34:49.882891  576188 system_pods.go:89] "registry-proxy-nxhd5" [78962db4-c230-431b-b141-405fd6389146] Running
	I0930 10:34:49.882913  576188 system_pods.go:89] "snapshot-controller-56fcc65765-fnzp5" [00072f66-80b8-45a4-b940-6db1fba0c14b] Running
	I0930 10:34:49.882918  576188 system_pods.go:89] "snapshot-controller-56fcc65765-rtd66" [e61b1df1-f9b8-4ed6-b8bb-30c16e9e1a30] Running
	I0930 10:34:49.882927  576188 system_pods.go:89] "storage-provisioner" [fcd0fbac-220e-4dd5-a1a6-3ecae26b1962] Running
	I0930 10:34:49.882936  576188 system_pods.go:126] duration metric: took 10.846857ms to wait for k8s-apps to be running ...
	I0930 10:34:49.882947  576188 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 10:34:49.883021  576188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:34:49.895565  576188 system_svc.go:56] duration metric: took 12.60696ms WaitForService to wait for kubelet
	I0930 10:34:49.895595  576188 kubeadm.go:582] duration metric: took 2m37.278117729s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:34:49.895621  576188 node_conditions.go:102] verifying NodePressure condition ...
	I0930 10:34:49.898702  576188 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0930 10:34:49.898735  576188 node_conditions.go:123] node cpu capacity is 2
	I0930 10:34:49.898746  576188 node_conditions.go:105] duration metric: took 3.119274ms to run NodePressure ...
	I0930 10:34:49.898785  576188 start.go:241] waiting for startup goroutines ...
	I0930 10:34:49.898799  576188 start.go:246] waiting for cluster config update ...
	I0930 10:34:49.898824  576188 start.go:255] writing updated cluster config ...
	I0930 10:34:49.899193  576188 ssh_runner.go:195] Run: rm -f paused
	I0930 10:34:50.233812  576188 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 10:34:50.238556  576188 out.go:177] * Done! kubectl is now configured to use "addons-718366" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 30 10:48:09 addons-718366 crio[968]: time="2024-09-30 10:48:09.142730907Z" level=info msg="Stopped pod sandbox: f2b1f198facf7ab8f7ebb90d983be9c379b0a83e7ac66402454c1cc3649d3a7d" id=a47d6164-e7a5-4788-91fc-e462b3f0ea09 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:48:09 addons-718366 crio[968]: time="2024-09-30 10:48:09.733031179Z" level=info msg="Removing container: 7f69273237e01e4b32a7abf35102ea7436bd25e241e1ce484814d273248b6897" id=6db5b8c5-075d-4353-8632-cc3a7158d730 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 30 10:48:09 addons-718366 crio[968]: time="2024-09-30 10:48:09.752686831Z" level=info msg="Removed container 7f69273237e01e4b32a7abf35102ea7436bd25e241e1ce484814d273248b6897: default/cloud-spanner-emulator-5b584cc74-jgnx2/cloud-spanner-emulator" id=6db5b8c5-075d-4353-8632-cc3a7158d730 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 30 10:48:17 addons-718366 crio[968]: time="2024-09-30 10:48:17.857621610Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=da63cd1b-f0e2-4c1b-b654-118384b1f381 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:48:17 addons-718366 crio[968]: time="2024-09-30 10:48:17.857859028Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=da63cd1b-f0e2-4c1b-b654-118384b1f381 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:48:31 addons-718366 crio[968]: time="2024-09-30 10:48:31.857696881Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cd97bd63-fa54-4fc8-8c31-5a9051581679 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:48:31 addons-718366 crio[968]: time="2024-09-30 10:48:31.857933635Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cd97bd63-fa54-4fc8-8c31-5a9051581679 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:48:42 addons-718366 crio[968]: time="2024-09-30 10:48:42.857159863Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e085e64c-e38a-4602-82d3-7c9ac02a9d68 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:48:42 addons-718366 crio[968]: time="2024-09-30 10:48:42.857391857Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e085e64c-e38a-4602-82d3-7c9ac02a9d68 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:48:53 addons-718366 crio[968]: time="2024-09-30 10:48:53.857222366Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e36f81b1-5bca-4c5f-b7b8-c3e862b64728 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:48:53 addons-718366 crio[968]: time="2024-09-30 10:48:53.857493612Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e36f81b1-5bca-4c5f-b7b8-c3e862b64728 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:49:08 addons-718366 crio[968]: time="2024-09-30 10:49:08.476626585Z" level=info msg="Stopping pod sandbox: f2b1f198facf7ab8f7ebb90d983be9c379b0a83e7ac66402454c1cc3649d3a7d" id=6bec4e7b-22d3-4fd2-b606-a7f05501e202 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:49:08 addons-718366 crio[968]: time="2024-09-30 10:49:08.476674404Z" level=info msg="Stopped pod sandbox (already stopped): f2b1f198facf7ab8f7ebb90d983be9c379b0a83e7ac66402454c1cc3649d3a7d" id=6bec4e7b-22d3-4fd2-b606-a7f05501e202 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:49:08 addons-718366 crio[968]: time="2024-09-30 10:49:08.477386854Z" level=info msg="Removing pod sandbox: f2b1f198facf7ab8f7ebb90d983be9c379b0a83e7ac66402454c1cc3649d3a7d" id=9b8cc019-716f-41b5-8e56-ee7be07c48de name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 30 10:49:08 addons-718366 crio[968]: time="2024-09-30 10:49:08.487252678Z" level=info msg="Removed pod sandbox: f2b1f198facf7ab8f7ebb90d983be9c379b0a83e7ac66402454c1cc3649d3a7d" id=9b8cc019-716f-41b5-8e56-ee7be07c48de name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 30 10:49:08 addons-718366 crio[968]: time="2024-09-30 10:49:08.857201712Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4980f01f-993e-418d-9146-3c03937e9801 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:49:08 addons-718366 crio[968]: time="2024-09-30 10:49:08.857437341Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4980f01f-993e-418d-9146-3c03937e9801 name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:49:21 addons-718366 crio[968]: time="2024-09-30 10:49:21.259473841Z" level=info msg="Stopping container: 7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9 (timeout: 30s)" id=1ee43506-36dd-4f75-b23b-c77fb1602af5 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:49:21 addons-718366 crio[968]: time="2024-09-30 10:49:21.858123033Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=14f03fc5-2ef2-4619-a2a1-38de1653ff6f name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:49:21 addons-718366 crio[968]: time="2024-09-30 10:49:21.858362133Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=14f03fc5-2ef2-4619-a2a1-38de1653ff6f name=/runtime.v1.ImageService/ImageStatus
	Sep 30 10:49:22 addons-718366 crio[968]: time="2024-09-30 10:49:22.437686750Z" level=info msg="Stopped container 7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9: kube-system/metrics-server-84c5f94fbc-jqf86/metrics-server" id=1ee43506-36dd-4f75-b23b-c77fb1602af5 name=/runtime.v1.RuntimeService/StopContainer
	Sep 30 10:49:22 addons-718366 crio[968]: time="2024-09-30 10:49:22.438582335Z" level=info msg="Stopping pod sandbox: 3898176a51cc931fc1d070d7549137bbdf627a2517d673972343c545960944fd" id=5292d110-cabe-4883-9b32-cac60d58e401 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 30 10:49:22 addons-718366 crio[968]: time="2024-09-30 10:49:22.438812565Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-jqf86 Namespace:kube-system ID:3898176a51cc931fc1d070d7549137bbdf627a2517d673972343c545960944fd UID:37c7c588-691f-43b1-bc7e-d9d29b8c740e NetNS:/var/run/netns/140fe935-4155-4d96-9171-f559268031c7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 30 10:49:22 addons-718366 crio[968]: time="2024-09-30 10:49:22.438959638Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-jqf86 from CNI network \"kindnet\" (type=ptp)"
	Sep 30 10:49:22 addons-718366 crio[968]: time="2024-09-30 10:49:22.480616127Z" level=info msg="Stopped pod sandbox: 3898176a51cc931fc1d070d7549137bbdf627a2517d673972343c545960944fd" id=5292d110-cabe-4883-9b32-cac60d58e401 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34a349e023590       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   146f54b359ff0       hello-world-app-55bf9c44b4-tqpln
	890353217af2d       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                         5 minutes ago       Running             nginx                     0                   3cdfaa97914c9       nginx
	f7e963bb19262       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            15 minutes ago      Running             gcp-auth                  0                   c248cf7b4c141       gcp-auth-89d5ffd79-4zcrm
	7f25834811580       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago      Exited              metrics-server            0                   3898176a51cc9       metrics-server-84c5f94fbc-jqf86
	8564280a03e37       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        16 minutes ago      Running             storage-provisioner       0                   8ea3828da6af2       storage-provisioner
	8970b526b14d3       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        16 minutes ago      Running             coredns                   0                   fc8f26b163074       coredns-7c65d6cfc9-dtmzl
	97d43354c9c18       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        17 minutes ago      Running             kindnet-cni               0                   b504155dced4a       kindnet-cx2x5
	d94629297e53a       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        17 minutes ago      Running             kube-proxy                0                   42061a7582848       kube-proxy-6d7ts
	f46dcd2ffd212       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        17 minutes ago      Running             kube-scheduler            0                   4328be32dbdda       kube-scheduler-addons-718366
	8427a90f7890f       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        17 minutes ago      Running             kube-controller-manager   0                   61fcf6c446cf3       kube-controller-manager-addons-718366
	162d3240be19c       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        17 minutes ago      Running             kube-apiserver            0                   c729a09320dc3       kube-apiserver-addons-718366
	c0e6564b9b165       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        17 minutes ago      Running             etcd                      0                   e280627a20055       etcd-addons-718366
	
	
	==> coredns [8970b526b14d3c4394e9d0d4b159b9a7035baa34d15ed557bdbc267e1dadde9e] <==
	[INFO] 10.244.0.16:51134 - 3862 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009631s
	[INFO] 10.244.0.16:51134 - 28713 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002595396s
	[INFO] 10.244.0.16:51134 - 38278 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002469697s
	[INFO] 10.244.0.16:51134 - 60275 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000123813s
	[INFO] 10.244.0.16:51134 - 20504 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000921133s
	[INFO] 10.244.0.16:42758 - 57933 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109519s
	[INFO] 10.244.0.16:42758 - 58163 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000222452s
	[INFO] 10.244.0.16:46219 - 49034 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046514s
	[INFO] 10.244.0.16:46219 - 48861 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090599s
	[INFO] 10.244.0.16:38841 - 23335 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049238s
	[INFO] 10.244.0.16:38841 - 23162 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000120719s
	[INFO] 10.244.0.16:36810 - 56287 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001370566s
	[INFO] 10.244.0.16:36810 - 56459 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001599623s
	[INFO] 10.244.0.16:53737 - 45454 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071399s
	[INFO] 10.244.0.16:53737 - 45306 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067083s
	[INFO] 10.244.0.19:60043 - 33793 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164099s
	[INFO] 10.244.0.19:45569 - 24882 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008945s
	[INFO] 10.244.0.19:37408 - 63394 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120645s
	[INFO] 10.244.0.19:32799 - 53535 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000581761s
	[INFO] 10.244.0.19:55061 - 24202 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127028s
	[INFO] 10.244.0.19:52877 - 28567 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066952s
	[INFO] 10.244.0.19:41260 - 35512 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00224238s
	[INFO] 10.244.0.19:50161 - 49874 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002246064s
	[INFO] 10.244.0.19:55943 - 60460 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001310399s
	[INFO] 10.244.0.19:51706 - 64277 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001688868s
	
	
	==> describe nodes <==
	Name:               addons-718366
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-718366
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=addons-718366
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T10_32_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-718366
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:32:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-718366
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:49:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:47:47 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:47:47 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:47:47 +0000   Mon, 30 Sep 2024 10:32:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:47:47 +0000   Mon, 30 Sep 2024 10:32:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-718366
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 77d89a06001e4e62a34a490bff9aa946
	  System UUID:                905a5f23-cdd8-48a6-a301-0dc3d894de03
	  Boot ID:                    cd5783c9-92b8-4cba-8495-065a6f022f89
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-tqpln         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  gcp-auth                    gcp-auth-89d5ffd79-4zcrm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-dtmzl                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-718366                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-cx2x5                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-718366             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-718366    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-6d7ts                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-718366             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node addons-718366 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node addons-718366 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node addons-718366 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node addons-718366 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node addons-718366 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node addons-718366 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                node-controller  Node addons-718366 event: Registered Node addons-718366 in Controller
	  Normal   NodeReady                16m                kubelet          Node addons-718366 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 09:39] IPVS: rr: TCP 192.168.49.254:8443 - no destination available
	[Sep30 10:06] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [c0e6564b9b165c62647ee862259e1cfe711e4417abb4213fa4f4cffbcb5e0e70] <==
	{"level":"warn","ts":"2024-09-30T10:32:15.975672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.962851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:32:15.975744Z","caller":"traceutil/trace.go:171","msg":"trace[1497020414] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:392; }","duration":"514.06067ms","start":"2024-09-30T10:32:15.461669Z","end":"2024-09-30T10:32:15.975729Z","steps":["trace[1497020414] 'agreement among raft nodes before linearized reading'  (duration: 340.620576ms)","trace[1497020414] 'range keys from in-memory index tree'  (duration: 173.328327ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:15.975777Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T10:32:15.461648Z","time spent":"514.122937ms","remote":"127.0.0.1:47722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":0,"response size":29,"request content":"key:\"/registry/deployments/default/cloud-spanner-emulator\" "}
	{"level":"warn","ts":"2024-09-30T10:32:15.976129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"543.814465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-cx2x5\" ","response":"range_response_count:1 size:5102"}
	{"level":"info","ts":"2024-09-30T10:32:15.976175Z","caller":"traceutil/trace.go:171","msg":"trace[164340211] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-cx2x5; range_end:; response_count:1; response_revision:392; }","duration":"543.863112ms","start":"2024-09-30T10:32:15.432303Z","end":"2024-09-30T10:32:15.976166Z","steps":["trace[164340211] 'agreement among raft nodes before linearized reading'  (duration: 369.990963ms)","trace[164340211] 'range keys from in-memory index tree'  (duration: 173.801529ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:15.976202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-30T10:32:15.432284Z","time spent":"543.912866ms","remote":"127.0.0.1:47438","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":5126,"request content":"key:\"/registry/pods/kube-system/kindnet-cx2x5\" "}
	{"level":"warn","ts":"2024-09-30T10:32:15.993928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.619432ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032241754536841 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-6d7ts.17f9ff067a572d5a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-6d7ts.17f9ff067a572d5a\" value_size:634 lease:8128032241754536491 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-30T10:32:16.005403Z","caller":"traceutil/trace.go:171","msg":"trace[221753668] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:401; }","duration":"151.390807ms","start":"2024-09-30T10:32:15.853990Z","end":"2024-09-30T10:32:16.005381Z","steps":["trace[221753668] 'read index received'  (duration: 34.289µs)","trace[221753668] 'applied index is now lower than readState.Index'  (duration: 151.353163ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T10:32:16.025685Z","caller":"traceutil/trace.go:171","msg":"trace[1564697964] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"248.129407ms","start":"2024-09-30T10:32:15.777531Z","end":"2024-09-30T10:32:16.025660Z","steps":["trace[1564697964] 'process raft request'  (duration: 44.642585ms)","trace[1564697964] 'compare'  (duration: 50.754322ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:16.045362Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.356368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:32:16.056252Z","caller":"traceutil/trace.go:171","msg":"trace[667812772] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"202.239182ms","start":"2024-09-30T10:32:15.853984Z","end":"2024-09-30T10:32:16.056223Z","steps":["trace[667812772] 'agreement among raft nodes before linearized reading'  (duration: 191.330883ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.205755Z","caller":"traceutil/trace.go:171","msg":"trace[685107055] linearizableReadLoop","detail":"{readStateIndex:407; appliedIndex:402; }","duration":"111.750218ms","start":"2024-09-30T10:32:16.093990Z","end":"2024-09-30T10:32:16.205740Z","steps":["trace[685107055] 'read index received'  (duration: 110.159088ms)","trace[685107055] 'applied index is now lower than readState.Index'  (duration: 1.59063ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-30T10:32:16.205871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.850417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-30T10:32:16.205893Z","caller":"traceutil/trace.go:171","msg":"trace[1709110772] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:398; }","duration":"111.899655ms","start":"2024-09-30T10:32:16.093987Z","end":"2024-09-30T10:32:16.205887Z","steps":["trace[1709110772] 'agreement among raft nodes before linearized reading'  (duration: 111.814619ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206103Z","caller":"traceutil/trace.go:171","msg":"trace[320854091] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"112.610299ms","start":"2024-09-30T10:32:16.093485Z","end":"2024-09-30T10:32:16.206096Z","steps":["trace[320854091] 'process raft request'  (duration: 112.105081ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206227Z","caller":"traceutil/trace.go:171","msg":"trace[273332653] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"112.693685ms","start":"2024-09-30T10:32:16.093527Z","end":"2024-09-30T10:32:16.206221Z","steps":["trace[273332653] 'process raft request'  (duration: 112.132609ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206323Z","caller":"traceutil/trace.go:171","msg":"trace[2053403231] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"112.580695ms","start":"2024-09-30T10:32:16.093736Z","end":"2024-09-30T10:32:16.206316Z","steps":["trace[2053403231] 'process raft request'  (duration: 111.949688ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.206367Z","caller":"traceutil/trace.go:171","msg":"trace[754986013] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"112.543945ms","start":"2024-09-30T10:32:16.093817Z","end":"2024-09-30T10:32:16.206361Z","steps":["trace[754986013] 'process raft request'  (duration: 111.897775ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:32:16.214557Z","caller":"traceutil/trace.go:171","msg":"trace[1253211950] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"121.318358ms","start":"2024-09-30T10:32:16.093218Z","end":"2024-09-30T10:32:16.214537Z","steps":["trace[1253211950] 'process raft request'  (duration: 110.795453ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:42:02.621285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1521}
	{"level":"info","ts":"2024-09-30T10:42:02.650830Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1521,"took":"28.997753ms","hash":1350178088,"current-db-size-bytes":6029312,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3149824,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-09-30T10:42:02.650884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1350178088,"revision":1521,"compact-revision":-1}
	{"level":"info","ts":"2024-09-30T10:47:02.627183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1935}
	{"level":"info","ts":"2024-09-30T10:47:02.645237Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1935,"took":"17.514837ms","hash":3001284146,"current-db-size-bytes":6029312,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":4386816,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-30T10:47:02.645293Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3001284146,"revision":1935,"compact-revision":1521}
	
	
	==> gcp-auth [f7e963bb19262a09a16f93679318ed840adcaedcbd6e1425db0d952add42b6b6] <==
	2024/09/30 10:34:50 Ready to write response ...
	2024/09/30 10:34:50 Ready to marshal response ...
	2024/09/30 10:34:50 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:42:54 Ready to marshal response ...
	2024/09/30 10:42:54 Ready to write response ...
	2024/09/30 10:43:04 Ready to marshal response ...
	2024/09/30 10:43:04 Ready to write response ...
	2024/09/30 10:43:15 Ready to marshal response ...
	2024/09/30 10:43:15 Ready to write response ...
	2024/09/30 10:43:36 Ready to marshal response ...
	2024/09/30 10:43:36 Ready to write response ...
	2024/09/30 10:44:20 Ready to marshal response ...
	2024/09/30 10:44:20 Ready to write response ...
	2024/09/30 10:46:38 Ready to marshal response ...
	2024/09/30 10:46:38 Ready to write response ...
	2024/09/30 10:47:09 Ready to marshal response ...
	2024/09/30 10:47:09 Ready to write response ...
	2024/09/30 10:47:09 Ready to marshal response ...
	2024/09/30 10:47:09 Ready to write response ...
	2024/09/30 10:47:19 Ready to marshal response ...
	2024/09/30 10:47:19 Ready to write response ...
	
	
	==> kernel <==
	 10:49:22 up 1 day, 10:31,  0 users,  load average: 0.27, 0.45, 1.19
	Linux addons-718366 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [97d43354c9c180d2a6948f9bae33d0050b9e7c1ccad5f71e8e0f8c0adddbcb0c] <==
	I0930 10:47:15.387539       1 main.go:299] handling current node
	I0930 10:47:25.387167       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:47:25.387204       1 main.go:299] handling current node
	I0930 10:47:35.395114       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:47:35.395153       1 main.go:299] handling current node
	I0930 10:47:45.387374       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:47:45.387498       1 main.go:299] handling current node
	I0930 10:47:55.389640       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:47:55.389693       1 main.go:299] handling current node
	I0930 10:48:05.386907       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:48:05.386945       1 main.go:299] handling current node
	I0930 10:48:15.387228       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:48:15.387264       1 main.go:299] handling current node
	I0930 10:48:25.386781       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:48:25.386851       1 main.go:299] handling current node
	I0930 10:48:35.392254       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:48:35.392291       1 main.go:299] handling current node
	I0930 10:48:45.387632       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:48:45.387754       1 main.go:299] handling current node
	I0930 10:48:55.389727       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:48:55.389884       1 main.go:299] handling current node
	I0930 10:49:05.393492       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:49:05.393525       1 main.go:299] handling current node
	I0930 10:49:15.386766       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0930 10:49:15.386803       1 main.go:299] handling current node
	
	
	==> kube-apiserver [162d3240be19c1b5a7fe2f693e5599acffa610bbaa11a23c93677892f02ac33b] <==
	I0930 10:42:54.890805       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.140.135"}
	I0930 10:43:26.331792       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0930 10:43:44.099917       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0930 10:43:51.726819       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.727276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.753391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.753536       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.783338       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.783491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.824695       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.824740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:43:51.863148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:43:51.863276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0930 10:43:52.825369       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0930 10:43:52.863903       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0930 10:43:52.910514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0930 10:44:14.305406       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0930 10:44:15.424396       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0930 10:44:19.885804       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0930 10:44:20.216244       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.7.114"}
	I0930 10:46:39.146523       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.76.220"}
	E0930 10:47:20.093607       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 10:47:20.103863       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 10:47:20.114576       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0930 10:47:35.115379       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [8427a90f7890fd0bfdf4d17d3e80bad623f91151bb5b94493daca4fbf40a7dce] <==
	E0930 10:47:24.432592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:47:39.496417       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:47:39.496460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:47:47.109964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-718366"
	W0930 10:47:49.928731       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:47:49.928775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:48:02.113698       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:48:02.113745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:48:07.403640       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0930 10:48:08.964231       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="7.106µs"
	W0930 10:48:10.424301       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:48:10.424342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:48:11.769262       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:48:11.769306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:48:43.338971       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:48:43.339015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:48:50.256472       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:48:50.256512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:49:00.661975       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:49:00.662015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:49:10.884877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:49:10.884920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:49:21.232203       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="8.82µs"
	W0930 10:49:21.758998       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:49:21.759141       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d94629297e53a3eb11434de11d1008f3173ecef93fdaacbe1fc803c4229fb5f6] <==
	I0930 10:32:17.422602       1 server_linux.go:66] "Using iptables proxy"
	I0930 10:32:18.022042       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0930 10:32:18.046247       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:32:18.422271       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0930 10:32:18.422424       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:32:18.435427       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:32:18.436261       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:32:18.436290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:32:18.439117       1 config.go:199] "Starting service config controller"
	I0930 10:32:18.439168       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:32:18.439259       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:32:18.439272       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:32:18.439784       1 config.go:328] "Starting node config controller"
	I0930 10:32:18.439802       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:32:18.539827       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 10:32:18.539944       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:32:18.539974       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f46dcd2ffd212c9a12751876395d1bcb3f5cf38bd25c903d3bcb0e3420b222cf] <==
	I0930 10:32:05.100973       1 serving.go:386] Generated self-signed cert in-memory
	W0930 10:32:06.502704       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 10:32:06.502825       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 10:32:06.502860       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 10:32:06.502922       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 10:32:06.525286       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 10:32:06.527188       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:32:06.529906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 10:32:06.530137       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 10:32:06.530163       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 10:32:06.530376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0930 10:32:06.535468       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 10:32:06.535769       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 10:32:07.631231       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 10:48:31 addons-718366 kubelet[1518]: E0930 10:48:31.858385    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df0189ce-cfa6-4fcb-9cb0-001e99817661"
	Sep 30 10:48:38 addons-718366 kubelet[1518]: E0930 10:48:38.209196    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693318208958958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:48:38 addons-718366 kubelet[1518]: E0930 10:48:38.209234    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693318208958958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:48:42 addons-718366 kubelet[1518]: E0930 10:48:42.857843    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df0189ce-cfa6-4fcb-9cb0-001e99817661"
	Sep 30 10:48:48 addons-718366 kubelet[1518]: E0930 10:48:48.212247    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693328212011970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:48:48 addons-718366 kubelet[1518]: E0930 10:48:48.212285    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693328212011970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:48:53 addons-718366 kubelet[1518]: E0930 10:48:53.857766    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df0189ce-cfa6-4fcb-9cb0-001e99817661"
	Sep 30 10:48:58 addons-718366 kubelet[1518]: E0930 10:48:58.215359    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693338215102759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:48:58 addons-718366 kubelet[1518]: E0930 10:48:58.215396    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693338215102759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:49:08 addons-718366 kubelet[1518]: E0930 10:49:08.221752    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693348217890292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:49:08 addons-718366 kubelet[1518]: E0930 10:49:08.221794    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693348217890292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:49:08 addons-718366 kubelet[1518]: E0930 10:49:08.857693    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df0189ce-cfa6-4fcb-9cb0-001e99817661"
	Sep 30 10:49:18 addons-718366 kubelet[1518]: E0930 10:49:18.224950    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693358224703895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:49:18 addons-718366 kubelet[1518]: E0930 10:49:18.224989    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727693358224703895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 10:49:21 addons-718366 kubelet[1518]: E0930 10:49:21.858708    1518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="df0189ce-cfa6-4fcb-9cb0-001e99817661"
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.555167    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxz9w\" (UniqueName: \"kubernetes.io/projected/37c7c588-691f-43b1-bc7e-d9d29b8c740e-kube-api-access-gxz9w\") pod \"37c7c588-691f-43b1-bc7e-d9d29b8c740e\" (UID: \"37c7c588-691f-43b1-bc7e-d9d29b8c740e\") "
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.555215    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/37c7c588-691f-43b1-bc7e-d9d29b8c740e-tmp-dir\") pod \"37c7c588-691f-43b1-bc7e-d9d29b8c740e\" (UID: \"37c7c588-691f-43b1-bc7e-d9d29b8c740e\") "
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.555559    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c7c588-691f-43b1-bc7e-d9d29b8c740e-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "37c7c588-691f-43b1-bc7e-d9d29b8c740e" (UID: "37c7c588-691f-43b1-bc7e-d9d29b8c740e"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.568554    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c7c588-691f-43b1-bc7e-d9d29b8c740e-kube-api-access-gxz9w" (OuterVolumeSpecName: "kube-api-access-gxz9w") pod "37c7c588-691f-43b1-bc7e-d9d29b8c740e" (UID: "37c7c588-691f-43b1-bc7e-d9d29b8c740e"). InnerVolumeSpecName "kube-api-access-gxz9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.656080    1518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gxz9w\" (UniqueName: \"kubernetes.io/projected/37c7c588-691f-43b1-bc7e-d9d29b8c740e-kube-api-access-gxz9w\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.656118    1518 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/37c7c588-691f-43b1-bc7e-d9d29b8c740e-tmp-dir\") on node \"addons-718366\" DevicePath \"\""
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.869242    1518 scope.go:117] "RemoveContainer" containerID="7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9"
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.906613    1518 scope.go:117] "RemoveContainer" containerID="7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9"
	Sep 30 10:49:22 addons-718366 kubelet[1518]: E0930 10:49:22.907049    1518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9\": container with ID starting with 7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9 not found: ID does not exist" containerID="7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9"
	Sep 30 10:49:22 addons-718366 kubelet[1518]: I0930 10:49:22.907088    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9"} err="failed to get container status \"7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9\": rpc error: code = NotFound desc = could not find container \"7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9\": container with ID starting with 7f25834811580b759896d89685bed64b97ae0e127521b742547b949d00ebe5c9 not found: ID does not exist"
	
	
	==> storage-provisioner [8564280a03e37716b0a9e9a9f7d87bbde241c67a46dcec2bb762772d073dec52] <==
	I0930 10:32:56.545004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:32:56.563543       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:32:56.563600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:32:56.576241       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:32:56.576984       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409!
	I0930 10:32:56.576481       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a56189b9-c62a-4b37-a064-2fefbb3251ee", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409 became leader
	I0930 10:32:56.677903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-718366_ea9e4a9f-f89a-497b-a662-d047c0307409!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-718366 -n addons-718366
helpers_test.go:261: (dbg) Run:  kubectl --context addons-718366 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-718366 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-718366 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-718366/192.168.49.2
	Start Time:       Mon, 30 Sep 2024 10:34:50 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q78z7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q78z7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-718366
	  Normal   Pulling    13m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m25s (x44 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (331.88s)


Test pass (295/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.54
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 4.66
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 210.71
31 TestAddons/serial/GCPAuth/Namespaces 0.22
35 TestAddons/parallel/InspektorGadget 11.76
38 TestAddons/parallel/CSI 40.35
39 TestAddons/parallel/Headlamp 17.79
40 TestAddons/parallel/CloudSpanner 6.56
41 TestAddons/parallel/LocalPath 53.35
42 TestAddons/parallel/NvidiaDevicePlugin 6.5
43 TestAddons/parallel/Yakd 11.72
44 TestAddons/StoppedEnableDisable 6.26
45 TestCertOptions 38.12
46 TestCertExpiration 239.3
48 TestForceSystemdFlag 41.37
49 TestForceSystemdEnv 42.69
55 TestErrorSpam/setup 29.73
56 TestErrorSpam/start 0.7
57 TestErrorSpam/status 1.04
58 TestErrorSpam/pause 1.8
59 TestErrorSpam/unpause 2.02
60 TestErrorSpam/stop 1.41
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 79.41
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 16.17
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.3
72 TestFunctional/serial/CacheCmd/cache/add_local 1.37
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.23
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 61.96
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.68
83 TestFunctional/serial/LogsFileCmd 1.69
84 TestFunctional/serial/InvalidService 4.02
86 TestFunctional/parallel/ConfigCmd 0.46
87 TestFunctional/parallel/DashboardCmd 9.38
88 TestFunctional/parallel/DryRun 0.42
89 TestFunctional/parallel/InternationalLanguage 0.2
90 TestFunctional/parallel/StatusCmd 1.2
94 TestFunctional/parallel/ServiceCmdConnect 12.71
95 TestFunctional/parallel/AddonsCmd 0.28
96 TestFunctional/parallel/PersistentVolumeClaim 27.61
98 TestFunctional/parallel/SSHCmd 0.65
99 TestFunctional/parallel/CpCmd 2.23
101 TestFunctional/parallel/FileSync 0.34
102 TestFunctional/parallel/CertSync 1.93
106 TestFunctional/parallel/NodeLabels 0.13
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
110 TestFunctional/parallel/License 0.26
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.48
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.03
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
123 TestFunctional/parallel/ServiceCmd/List 0.6
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
126 TestFunctional/parallel/ProfileCmd/profile_list 0.51
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
129 TestFunctional/parallel/MountCmd/any-port 11.35
130 TestFunctional/parallel/ServiceCmd/Format 0.51
131 TestFunctional/parallel/ServiceCmd/URL 0.43
132 TestFunctional/parallel/MountCmd/specific-port 2.04
133 TestFunctional/parallel/Version/short 0.08
134 TestFunctional/parallel/Version/components 1.14
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.75
140 TestFunctional/parallel/ImageCommands/Setup 0.73
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.72
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.13
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.28
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.57
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 170.21
159 TestMultiControlPlane/serial/DeployApp 11.09
160 TestMultiControlPlane/serial/PingHostFromPods 1.55
161 TestMultiControlPlane/serial/AddWorkerNode 61.17
162 TestMultiControlPlane/serial/NodeLabels 0.12
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
164 TestMultiControlPlane/serial/CopyFile 18.03
165 TestMultiControlPlane/serial/StopSecondaryNode 12.7
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
167 TestMultiControlPlane/serial/RestartSecondaryNode 22.01
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.4
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 237.36
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.5
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
172 TestMultiControlPlane/serial/StopCluster 35.79
173 TestMultiControlPlane/serial/RestartCluster 94.68
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
175 TestMultiControlPlane/serial/AddSecondaryNode 69.62
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
180 TestJSONOutput/start/Command 76.27
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.74
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.65
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.94
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 38.45
206 TestKicCustomNetwork/use_default_bridge_network 35.61
207 TestKicExistingNetwork 32.75
208 TestKicCustomSubnet 32.93
209 TestKicStaticIP 34.5
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 64.67
214 TestMountStart/serial/StartWithMountFirst 6.72
215 TestMountStart/serial/VerifyMountFirst 0.26
216 TestMountStart/serial/StartWithMountSecond 9.28
217 TestMountStart/serial/VerifyMountSecond 0.24
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.21
221 TestMountStart/serial/RestartStopped 8.21
222 TestMountStart/serial/VerifyMountPostStop 0.24
225 TestMultiNode/serial/FreshStart2Nodes 107.04
226 TestMultiNode/serial/DeployApp2Nodes 7.6
227 TestMultiNode/serial/PingHostFrom2Pods 0.96
228 TestMultiNode/serial/AddNode 28.25
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.66
231 TestMultiNode/serial/CopyFile 9.71
232 TestMultiNode/serial/StopNode 2.21
233 TestMultiNode/serial/StartAfterStop 9.71
234 TestMultiNode/serial/RestartKeepsNodes 105.66
235 TestMultiNode/serial/DeleteNode 5.54
236 TestMultiNode/serial/StopMultiNode 23.88
237 TestMultiNode/serial/RestartMultiNode 47.05
238 TestMultiNode/serial/ValidateNameConflict 33.47
243 TestPreload 126.52
245 TestScheduledStopUnix 105.59
248 TestInsufficientStorage 10.18
249 TestRunningBinaryUpgrade 67.62
251 TestKubernetesUpgrade 395.92
252 TestMissingContainerUpgrade 166.34
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 39.77
256 TestNoKubernetes/serial/StartWithStopK8s 7.98
257 TestNoKubernetes/serial/Start 7.27
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
259 TestNoKubernetes/serial/ProfileList 1.18
260 TestNoKubernetes/serial/Stop 1.26
261 TestNoKubernetes/serial/StartNoArgs 7.15
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
263 TestStoppedBinaryUpgrade/Setup 0.61
264 TestStoppedBinaryUpgrade/Upgrade 76.56
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
274 TestPause/serial/Start 49.6
275 TestPause/serial/SecondStartNoReconfiguration 28.33
276 TestPause/serial/Pause 0.76
277 TestPause/serial/VerifyStatus 0.31
278 TestPause/serial/Unpause 0.71
279 TestPause/serial/PauseAgain 1.1
280 TestPause/serial/DeletePaused 2.67
281 TestPause/serial/VerifyDeletedResources 0.48
289 TestNetworkPlugins/group/false 4.92
294 TestStartStop/group/old-k8s-version/serial/FirstStart 164.19
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.88
297 TestStartStop/group/no-preload/serial/FirstStart 65.32
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.68
299 TestStartStop/group/old-k8s-version/serial/Stop 14.5
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
301 TestStartStop/group/old-k8s-version/serial/SecondStart 147.21
302 TestStartStop/group/no-preload/serial/DeployApp 10.46
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.65
304 TestStartStop/group/no-preload/serial/Stop 12.24
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/no-preload/serial/SecondStart 267.71
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
310 TestStartStop/group/old-k8s-version/serial/Pause 2.93
312 TestStartStop/group/embed-certs/serial/FirstStart 80.84
313 TestStartStop/group/embed-certs/serial/DeployApp 13.35
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
315 TestStartStop/group/embed-certs/serial/Stop 12
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
317 TestStartStop/group/embed-certs/serial/SecondStart 275.38
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
321 TestStartStop/group/no-preload/serial/Pause 3.03
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.2
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.35
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.08
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/embed-certs/serial/Pause 3
334 TestStartStop/group/newest-cni/serial/FirstStart 33.92
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
337 TestStartStop/group/newest-cni/serial/Stop 1.22
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/newest-cni/serial/SecondStart 15.62
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
343 TestStartStop/group/newest-cni/serial/Pause 2.98
344 TestNetworkPlugins/group/auto/Start 49.34
345 TestNetworkPlugins/group/auto/KubeletFlags 0.28
346 TestNetworkPlugins/group/auto/NetCatPod 11.26
347 TestNetworkPlugins/group/auto/DNS 0.18
348 TestNetworkPlugins/group/auto/Localhost 0.16
349 TestNetworkPlugins/group/auto/HairPin 0.16
350 TestNetworkPlugins/group/kindnet/Start 76.01
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.69
355 TestNetworkPlugins/group/calico/Start 60.85
356 TestNetworkPlugins/group/kindnet/ControllerPod 6
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
358 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
359 TestNetworkPlugins/group/kindnet/DNS 0.22
360 TestNetworkPlugins/group/kindnet/Localhost 0.16
361 TestNetworkPlugins/group/kindnet/HairPin 0.18
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.37
364 TestNetworkPlugins/group/calico/NetCatPod 13.35
365 TestNetworkPlugins/group/custom-flannel/Start 64.32
366 TestNetworkPlugins/group/calico/DNS 0.25
367 TestNetworkPlugins/group/calico/Localhost 0.19
368 TestNetworkPlugins/group/calico/HairPin 0.21
369 TestNetworkPlugins/group/enable-default-cni/Start 78.24
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
372 TestNetworkPlugins/group/custom-flannel/DNS 0.19
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
375 TestNetworkPlugins/group/flannel/Start 59.76
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.3
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
381 TestNetworkPlugins/group/bridge/Start 42.44
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
384 TestNetworkPlugins/group/flannel/NetCatPod 11.34
385 TestNetworkPlugins/group/flannel/DNS 0.27
386 TestNetworkPlugins/group/flannel/Localhost 0.28
387 TestNetworkPlugins/group/flannel/HairPin 0.17
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
389 TestNetworkPlugins/group/bridge/NetCatPod 12.39
390 TestNetworkPlugins/group/bridge/DNS 0.18
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (5.54s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-032798 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-032798 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.539543864s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.54s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0930 10:31:12.663196  575428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0930 10:31:12.663275  575428 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-032798
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-032798: exit status 85 (62.408659ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-032798 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |          |
	|         | -p download-only-032798        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:31:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:31:07.167572  575434 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:31:07.167767  575434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:07.167795  575434 out.go:358] Setting ErrFile to fd 2...
	I0930 10:31:07.167825  575434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:07.168081  575434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	W0930 10:31:07.168251  575434 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19734-570035/.minikube/config/config.json: open /home/jenkins/minikube-integration/19734-570035/.minikube/config/config.json: no such file or directory
	I0930 10:31:07.168701  575434 out.go:352] Setting JSON to true
	I0930 10:31:07.169646  575434 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":123214,"bootTime":1727569054,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0930 10:31:07.169747  575434 start.go:139] virtualization:  
	I0930 10:31:07.174008  575434 out.go:97] [download-only-032798] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0930 10:31:07.174227  575434 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:31:07.174294  575434 notify.go:220] Checking for updates...
	I0930 10:31:07.177087  575434 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:31:07.180031  575434 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:31:07.182806  575434 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:31:07.185696  575434 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	I0930 10:31:07.188463  575434 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0930 10:31:07.193285  575434 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:31:07.193607  575434 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:31:07.220525  575434 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:31:07.220631  575434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:07.269347  575434 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:31:07.260013343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:07.269458  575434 docker.go:318] overlay module found
	I0930 10:31:07.272113  575434 out.go:97] Using the docker driver based on user configuration
	I0930 10:31:07.272138  575434 start.go:297] selected driver: docker
	I0930 10:31:07.272146  575434 start.go:901] validating driver "docker" against <nil>
	I0930 10:31:07.272262  575434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:07.318806  575434 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:31:07.30882964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:07.319029  575434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:31:07.319330  575434 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0930 10:31:07.319489  575434 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:31:07.322298  575434 out.go:169] Using Docker driver with root privileges
	I0930 10:31:07.324972  575434 cni.go:84] Creating CNI manager for ""
	I0930 10:31:07.325037  575434 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0930 10:31:07.325051  575434 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 10:31:07.325141  575434 start.go:340] cluster config:
	{Name:download-only-032798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-032798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:31:07.327875  575434 out.go:97] Starting "download-only-032798" primary control-plane node in "download-only-032798" cluster
	I0930 10:31:07.327901  575434 cache.go:121] Beginning downloading kic base image for docker with crio
	I0930 10:31:07.330656  575434 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:31:07.330682  575434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 10:31:07.330841  575434 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:31:07.346265  575434 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:31:07.346829  575434 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:31:07.346933  575434 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:31:07.389785  575434 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0930 10:31:07.389816  575434 cache.go:56] Caching tarball of preloaded images
	I0930 10:31:07.389966  575434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 10:31:07.393021  575434 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0930 10:31:07.393052  575434 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0930 10:31:07.475717  575434 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0930 10:31:10.966496  575434 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0930 10:31:10.966614  575434 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0930 10:31:11.463690  575434 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0930 10:31:12.106524  575434 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 10:31:12.106902  575434 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/download-only-032798/config.json ...
	I0930 10:31:12.106936  575434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/download-only-032798/config.json: {Name:mkef2e0ed12fb78d86ea8995d5870abc144c45bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:31:12.107119  575434 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 10:31:12.107304  575434 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19734-570035/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-032798 host does not exist
	  To start a cluster, run: "minikube start -p download-only-032798"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-032798
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (4.66s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-575153 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-575153 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.663712759s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.66s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0930 10:31:17.716635  575428 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0930 10:31:17.716673  575428 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-570035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-575153
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-575153: exit status 85 (62.875356ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-032798 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | -p download-only-032798        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| delete  | -p download-only-032798        | download-only-032798 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC | 30 Sep 24 10:31 UTC |
	| start   | -o=json --download-only        | download-only-575153 | jenkins | v1.34.0 | 30 Sep 24 10:31 UTC |                     |
	|         | -p download-only-575153        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:31:13
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:31:13.096872  575635 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:31:13.097118  575635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:13.097132  575635 out.go:358] Setting ErrFile to fd 2...
	I0930 10:31:13.097138  575635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:31:13.097427  575635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 10:31:13.097904  575635 out.go:352] Setting JSON to true
	I0930 10:31:13.098844  575635 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":123219,"bootTime":1727569054,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0930 10:31:13.098920  575635 start.go:139] virtualization:  
	I0930 10:31:13.101194  575635 out.go:97] [download-only-575153] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:31:13.101377  575635 notify.go:220] Checking for updates...
	I0930 10:31:13.102593  575635 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:31:13.104026  575635 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:31:13.105608  575635 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:31:13.107774  575635 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	I0930 10:31:13.109284  575635 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0930 10:31:13.112116  575635 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:31:13.112427  575635 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:31:13.135062  575635 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:31:13.135183  575635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:13.196146  575635 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-30 10:31:13.18619514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:13.196258  575635 docker.go:318] overlay module found
	I0930 10:31:13.197787  575635 out.go:97] Using the docker driver based on user configuration
	I0930 10:31:13.197815  575635 start.go:297] selected driver: docker
	I0930 10:31:13.197822  575635 start.go:901] validating driver "docker" against <nil>
	I0930 10:31:13.197929  575635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:31:13.248506  575635 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-30 10:31:13.238301035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:31:13.248725  575635 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:31:13.249014  575635 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0930 10:31:13.249170  575635 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:31:13.250926  575635 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-575153 host does not exist
	  To start a cluster, run: "minikube start -p download-only-575153"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-575153
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I0930 10:31:18.901333  575428 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-919874 --alsologtostderr --binary-mirror http://127.0.0.1:44655 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-919874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-919874
--- PASS: TestBinaryMirror (0.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-718366
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-718366: exit status 85 (61.038711ms)

-- stdout --
	* Profile "addons-718366" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-718366"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-718366
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-718366: exit status 85 (80.785481ms)

-- stdout --
	* Profile "addons-718366" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-718366"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (210.71s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-718366 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-718366 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m30.714070705s)
--- PASS: TestAddons/Setup (210.71s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-718366 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-718366 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/parallel/InspektorGadget (11.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ltftl" [fd467ff5-40ce-4126-a461-ff0acaf75054] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003942583s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-718366
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-718366: (5.751458834s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

TestAddons/parallel/CSI (40.35s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0930 10:43:11.796484  575428 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0930 10:43:11.802218  575428 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 10:43:11.802252  575428 kapi.go:107] duration metric: took 5.780143ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 5.790309ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-718366 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-718366 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bac5668e-cf26-49aa-8c34-f34063373958] Pending
helpers_test.go:344: "task-pv-pod" [bac5668e-cf26-49aa-8c34-f34063373958] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bac5668e-cf26-49aa-8c34-f34063373958] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003518356s
addons_test.go:528: (dbg) Run:  kubectl --context addons-718366 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-718366 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-718366 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-718366 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-718366 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-718366 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-718366 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fbf3b22a-6ec2-4ffa-a8f9-548170738cde] Pending
helpers_test.go:344: "task-pv-pod-restore" [fbf3b22a-6ec2-4ffa-a8f9-548170738cde] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fbf3b22a-6ec2-4ffa-a8f9-548170738cde] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00476128s
addons_test.go:570: (dbg) Run:  kubectl --context addons-718366 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-718366 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-718366 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.751128985s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.35s)

TestAddons/parallel/Headlamp (17.79s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-718366 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-znpjh" [ad46e32e-ba18-4c7d-993b-76f4abbf44b0] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-znpjh" [ad46e32e-ba18-4c7d-993b-76f4abbf44b0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-znpjh" [ad46e32e-ba18-4c7d-993b-76f4abbf44b0] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003511161s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 addons disable headlamp --alsologtostderr -v=1: (5.826593012s)
--- PASS: TestAddons/parallel/Headlamp (17.79s)

TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-jgnx2" [0e645fab-c1be-4b2c-ada9-b3534c23c10a] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003491257s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-718366
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

TestAddons/parallel/LocalPath (53.35s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-718366 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-718366 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718366 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [09e4bfc2-7999-4747-b898-ab51ad4f7524] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [09e4bfc2-7999-4747-b898-ab51ad4f7524] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [09e4bfc2-7999-4747-b898-ab51ad4f7524] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003451411s
addons_test.go:938: (dbg) Run:  kubectl --context addons-718366 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 ssh "cat /opt/local-path-provisioner/pvc-271312af-c6d1-4918-84a6-e0da61228c61_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-718366 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-718366 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.30334095s)
--- PASS: TestAddons/parallel/LocalPath (53.35s)

TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4vhfz" [409875b6-caeb-49b0-a6a3-4adab5c26abf] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004323722s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-718366
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (11.72s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bjwsv" [faf8d469-c068-4ecf-ba18-9f52e7ca1017] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004674774s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-718366 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-718366 addons disable yakd --alsologtostderr -v=1: (5.717741525s)
--- PASS: TestAddons/parallel/Yakd (11.72s)

TestAddons/StoppedEnableDisable (6.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-718366
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-718366: (5.971244675s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-718366
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-718366
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-718366
--- PASS: TestAddons/StoppedEnableDisable (6.26s)

TestCertOptions (38.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-120335 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-120335 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.495043243s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-120335 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-120335 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-120335 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-120335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-120335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-120335: (1.96141978s)
--- PASS: TestCertOptions (38.12s)

TestCertExpiration (239.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-303949 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0930 11:31:16.886785  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-303949 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.79800101s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-303949 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0930 11:34:50.751355  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-303949 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.061089451s)
helpers_test.go:175: Cleaning up "cert-expiration-303949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-303949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-303949: (2.434753409s)
--- PASS: TestCertExpiration (239.30s)

TestForceSystemdFlag (41.37s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-509799 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-509799 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.741496173s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-509799 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-509799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-509799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-509799: (2.341367371s)
--- PASS: TestForceSystemdFlag (41.37s)

TestForceSystemdEnv (42.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-657421 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-657421 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.878198284s)
helpers_test.go:175: Cleaning up "force-systemd-env-657421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-657421
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-657421: (2.808021845s)
--- PASS: TestForceSystemdEnv (42.69s)

TestErrorSpam/setup (29.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-203687 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-203687 --driver=docker  --container-runtime=crio
E0930 10:49:50.751180  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:50.757883  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:50.769257  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:50.790627  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:50.831998  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:50.913418  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:51.074900  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:51.396561  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:52.038601  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:53.319954  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:55.881857  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:50:01.003954  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-203687 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-203687 --driver=docker  --container-runtime=crio: (29.729073276s)
--- PASS: TestErrorSpam/setup (29.73s)

TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 status
--- PASS: TestErrorSpam/status (1.04s)

TestErrorSpam/pause (1.8s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 pause
--- PASS: TestErrorSpam/pause (1.80s)

TestErrorSpam/unpause (2.02s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 unpause
--- PASS: TestErrorSpam/unpause (2.02s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 stop
E0930 10:50:11.245422  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 stop: (1.222096716s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-203687 --log_dir /tmp/nospam-203687 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19734-570035/.minikube/files/etc/test/nested/copy/575428/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.41s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300388 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0930 10:50:31.726722  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:51:12.689922  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-300388 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.412634111s)
--- PASS: TestFunctional/serial/StartWithProxy (79.41s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (16.17s)

=== RUN   TestFunctional/serial/SoftStart
I0930 10:51:37.581795  575428 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300388 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-300388 --alsologtostderr -v=8: (16.163988885s)
functional_test.go:663: soft start took 16.171707479s for "functional-300388" cluster.
I0930 10:51:53.753290  575428 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (16.17s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-300388 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 cache add registry.k8s.io/pause:3.1: (1.414354704s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 cache add registry.k8s.io/pause:3.3: (1.43636163s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 cache add registry.k8s.io/pause:latest: (1.44454505s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.30s)

TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-300388 /tmp/TestFunctionalserialCacheCmdcacheadd_local319696118/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cache add minikube-local-cache-test:functional-300388
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cache delete minikube-local-cache-test:functional-300388
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-300388
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.299471ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 cache reload: (1.222880986s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.23s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 kubectl -- --context functional-300388 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-300388 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (61.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300388 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0930 10:52:34.613675  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-300388 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m1.964128596s)
functional_test.go:761: restart took 1m1.964215445s for "functional-300388" cluster.
I0930 10:53:04.570909  575428 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (61.96s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-300388 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
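The ComponentHealth check above amounts to parsing `kubectl get po -o=json` output and requiring each control-plane pod to report phase `Running` with a `Ready` condition of `True`. A minimal sketch of that parse in Go, run against a hand-written one-pod sample rather than real cluster output (the `healthy` helper and the trimmed field set are illustrative, not minikube's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podList models only the PodList fields the health check reads.
type podList struct {
	Items []struct {
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

// healthy reports whether every pod in the JSON list is Running
// and carries a Ready condition with status True.
func healthy(raw []byte) (bool, error) {
	var pl podList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		if p.Status.Phase != "Running" || !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Hand-written sample standing in for `kubectl get po -o=json` output.
	sample := []byte(`{"items":[{"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`)
	ok, err := healthy(sample)
	fmt.Println(ok, err)
}
```

Against a live cluster one would feed the bytes of `kubectl --context <ctx> get po -l tier=control-plane -n kube-system -o=json` into the same helper instead of the inline sample.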

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 logs: (1.680860346s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.69s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 logs --file /tmp/TestFunctionalserialLogsFileCmd3172265223/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 logs --file /tmp/TestFunctionalserialLogsFileCmd3172265223/001/logs.txt: (1.690216547s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

TestFunctional/serial/InvalidService (4.02s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-300388 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-300388
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-300388: exit status 115 (449.686195ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30168 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-300388 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 config get cpus: exit status 14 (69.717617ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 config get cpus: exit status 14 (83.997204ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (9.38s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-300388 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-300388 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 609022: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.38s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300388 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-300388 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.548282ms)
-- stdout --
	* [functional-300388] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0930 10:53:45.957541  608785 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:53:45.957759  608785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:53:45.957785  608785 out.go:358] Setting ErrFile to fd 2...
	I0930 10:53:45.957804  608785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:53:45.958078  608785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 10:53:45.958494  608785 out.go:352] Setting JSON to false
	I0930 10:53:45.959504  608785 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":124572,"bootTime":1727569054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0930 10:53:45.959600  608785 start.go:139] virtualization:  
	I0930 10:53:45.962880  608785 out.go:177] * [functional-300388] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:53:45.966434  608785 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:53:45.966502  608785 notify.go:220] Checking for updates...
	I0930 10:53:45.972136  608785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:53:45.974744  608785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:53:45.977302  608785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	I0930 10:53:45.979909  608785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:53:45.982539  608785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:53:45.985715  608785 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:53:45.986318  608785 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:53:46.008912  608785 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:53:46.009039  608785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:53:46.064410  608785 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:53:46.054099201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:53:46.064527  608785 docker.go:318] overlay module found
	I0930 10:53:46.067558  608785 out.go:177] * Using the docker driver based on existing profile
	I0930 10:53:46.070116  608785 start.go:297] selected driver: docker
	I0930 10:53:46.070131  608785 start.go:901] validating driver "docker" against &{Name:functional-300388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-300388 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:53:46.070245  608785 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:53:46.073488  608785 out.go:201] 
	W0930 10:53:46.076033  608785 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 10:53:46.078595  608785 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300388 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300388 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-300388 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.266955ms)
-- stdout --
	* [functional-300388] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0930 10:53:45.777128  608725 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:53:45.777310  608725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:53:45.777320  608725 out.go:358] Setting ErrFile to fd 2...
	I0930 10:53:45.777356  608725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:53:45.777749  608725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 10:53:45.778229  608725 out.go:352] Setting JSON to false
	I0930 10:53:45.779435  608725 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":124572,"bootTime":1727569054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0930 10:53:45.779509  608725 start.go:139] virtualization:  
	I0930 10:53:45.782824  608725 out.go:177] * [functional-300388] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0930 10:53:45.786206  608725 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:53:45.786295  608725 notify.go:220] Checking for updates...
	I0930 10:53:45.791519  608725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:53:45.794159  608725 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 10:53:45.796767  608725 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	I0930 10:53:45.799464  608725 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:53:45.802089  608725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:53:45.805153  608725 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:53:45.805766  608725 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:53:45.830073  608725 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:53:45.830196  608725 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:53:45.886026  608725 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:53:45.873017948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:53:45.886147  608725 docker.go:318] overlay module found
	I0930 10:53:45.890664  608725 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0930 10:53:45.893226  608725 start.go:297] selected driver: docker
	I0930 10:53:45.893247  608725 start.go:901] validating driver "docker" against &{Name:functional-300388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-300388 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:53:45.893375  608725 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:53:45.896684  608725 out.go:201] 
	W0930 10:53:45.899359  608725 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0930 10:53:45.902115  608725 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.2s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
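The `-f` argument recorded above is a Go `text/template` format string (the `kublet` key in the logged command is a verbatim typo in the test's own format string, preserved here as logged). A minimal sketch of how such a format string is rendered; the `minikubeStatus` struct and its field values are illustrative stand-ins, not minikube's real status type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// minikubeStatus is a stand-in for the fields the -f format string
// references; minikube's actual status struct is richer.
type minikubeStatus struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus applies a Go text/template format string of the kind
// `minikube status -f ...` accepts.
func renderStatus(format string, st minikubeStatus) string {
	var buf bytes.Buffer
	template.Must(template.New("status").Parse(format)).Execute(&buf, st)
	return buf.String()
}

func main() {
	format := "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	fmt.Println(renderStatus(format, minikubeStatus{"Running", "Running", "Running", "Configured"}))
}
```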
TestFunctional/parallel/ServiceCmdConnect (12.71s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-300388 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-300388 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-nph8g" [d2d994b2-3bbf-4132-811b-56f5d07c9af4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-nph8g" [d2d994b2-3bbf-4132-811b-56f5d07c9af4] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.005754944s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31149
functional_test.go:1675: http://192.168.49.2:31149: success! body:
Hostname: hello-node-connect-65d86f57f4-nph8g
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31149
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.71s)
TestFunctional/parallel/AddonsCmd (0.28s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)
TestFunctional/parallel/PersistentVolumeClaim (27.61s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cfde75a9-df9b-4213-b268-0b4c1c3dea1e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003370277s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-300388 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-300388 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-300388 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-300388 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8722f5dd-44d3-42ce-a430-2bded2170c0a] Pending
helpers_test.go:344: "sp-pod" [8722f5dd-44d3-42ce-a430-2bded2170c0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8722f5dd-44d3-42ce-a430-2bded2170c0a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003278414s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-300388 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-300388 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-300388 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0cceeb4e-b7ea-4d57-9ce4-614d6fe06e56] Pending
helpers_test.go:344: "sp-pod" [0cceeb4e-b7ea-4d57-9ce4-614d6fe06e56] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0cceeb4e-b7ea-4d57-9ce4-614d6fe06e56] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003474389s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-300388 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.61s)
TestFunctional/parallel/SSHCmd (0.65s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)
TestFunctional/parallel/CpCmd (2.23s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh -n functional-300388 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cp functional-300388:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3401385550/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh -n functional-300388 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh -n functional-300388 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.23s)
TestFunctional/parallel/FileSync (0.34s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/575428/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo cat /etc/test/nested/copy/575428/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)
TestFunctional/parallel/CertSync (1.93s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/575428.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo cat /etc/ssl/certs/575428.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/575428.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo cat /usr/share/ca-certificates/575428.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5754282.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo cat /etc/ssl/certs/5754282.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5754282.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo cat /usr/share/ca-certificates/5754282.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.93s)
TestFunctional/parallel/NodeLabels (0.13s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-300388 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
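The `--template` expression in the command above is plain Go `text/template`. A small sketch evaluating the same expression against a stand-in node-list document (the label set below is illustrative, not taken from the cluster under test); note that `range` over a map visits keys in sorted order, which is why the printed label keys are stable across runs:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// labelKeys evaluates the exact template expression the test passes to
// kubectl, against an arbitrary decoded node-list document.
func labelKeys(doc map[string]any) string {
	const expr = "{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"
	var buf bytes.Buffer
	template.Must(template.New("labels").Parse(expr)).Execute(&buf, doc)
	return buf.String()
}

func main() {
	// Illustrative stand-in for `kubectl get nodes -o json` output.
	doc := map[string]any{
		"items": []any{
			map[string]any{
				"metadata": map[string]any{
					"labels": map[string]string{
						"kubernetes.io/os":   "linux",
						"kubernetes.io/arch": "arm64",
					},
				},
			},
		},
	}
	fmt.Println(labelKeys(doc))
}
```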
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 ssh "sudo systemctl is-active docker": exit status 1 (313.338472ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 ssh "sudo systemctl is-active containerd": exit status 1 (305.437843ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
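The test above treats `systemctl is-active` exiting non-zero (status 3 in the log, alongside the printed `inactive`) as proof the runtime is disabled. A minimal sketch of recovering a command's exit status in Go the way a harness like this must; `sh -c "exit 3"` stands in for the real `ssh`/`systemctl` invocation:

```go
package main

import (
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit status. systemctl is-active
// signals "not active" through a non-zero status, so the code alone is
// enough to distinguish an enabled runtime from a disabled one.
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode()
	}
	return -1 // command failed to start at all
}

func main() {
	// Mimics systemctl's exit status for an inactive unit.
	fmt.Println(exitCode("sh", "-c", "exit 3"))
}
```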
TestFunctional/parallel/License (0.26s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300388 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300388 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-300388 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 606667: os: process already finished
helpers_test.go:508: unable to kill pid 606515: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-300388 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300388 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-300388 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [53608bd9-b1ad-433f-b6c7-fe170c4c19af] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [53608bd9-b1ad-433f-b6c7-fe170c4c19af] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004643845s
I0930 10:53:22.292641  575428 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-300388 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.225.45 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.03s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-300388 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-300388 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-300388 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-gw9n2" [72b48777-50a7-4e00-85da-8f0208726860] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-gw9n2" [72b48777-50a7-4e00-85da-8f0208726860] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00492412s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
TestFunctional/parallel/ServiceCmd/List (0.6s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 service list -o json
functional_test.go:1494: Took "641.193346ms" to run "out/minikube-linux-arm64 -p functional-300388 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "444.768245ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "62.379841ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "387.473682ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "63.366377ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30243
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
TestFunctional/parallel/MountCmd/any-port (11.35s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdany-port1457770838/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727693623374668597" to /tmp/TestFunctionalparallelMountCmdany-port1457770838/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727693623374668597" to /tmp/TestFunctionalparallelMountCmdany-port1457770838/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727693623374668597" to /tmp/TestFunctionalparallelMountCmdany-port1457770838/001/test-1727693623374668597
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (400.847781ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0930 10:53:43.775780  575428 retry.go:31] will retry after 346.31542ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 30 10:53 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 30 10:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 30 10:53 test-1727693623374668597
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh cat /mount-9p/test-1727693623374668597
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-300388 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b0417975-9268-43b0-a9e5-beb87f900f04] Pending
helpers_test.go:344: "busybox-mount" [b0417975-9268-43b0-a9e5-beb87f900f04] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b0417975-9268-43b0-a9e5-beb87f900f04] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b0417975-9268-43b0-a9e5-beb87f900f04] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004748459s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-300388 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdany-port1457770838/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.35s)
TestFunctional/parallel/ServiceCmd/Format (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)
TestFunctional/parallel/ServiceCmd/URL (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30243
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
TestFunctional/parallel/MountCmd/specific-port (2.04s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdspecific-port1590105880/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.949621ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 10:53:55.048721  575428 retry.go:31] will retry after 586.183649ms: exit status 1
2024/09/30 10:53:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdspecific-port1590105880/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 ssh "sudo umount -f /mount-9p": exit status 1 (299.830609ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-300388 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdspecific-port1590105880/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.14s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 version -o=json --components: (1.14129185s)
--- PASS: TestFunctional/parallel/Version/components (1.14s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300388 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-300388
localhost/kicbase/echo-server:functional-300388
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300388 image ls --format short --alsologtostderr:
I0930 10:54:04.173751  611564 out.go:345] Setting OutFile to fd 1 ...
I0930 10:54:04.173947  611564 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.173984  611564 out.go:358] Setting ErrFile to fd 2...
I0930 10:54:04.174020  611564 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.174377  611564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
I0930 10:54:04.175156  611564 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.175360  611564 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.175927  611564 cli_runner.go:164] Run: docker container inspect functional-300388 --format={{.State.Status}}
I0930 10:54:04.194571  611564 ssh_runner.go:195] Run: systemctl --version
I0930 10:54:04.194618  611564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300388
I0930 10:54:04.220090  611564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38998 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/functional-300388/id_rsa Username:docker}
I0930 10:54:04.310467  611564 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300388 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| localhost/kicbase/echo-server           | functional-300388  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-300388  | 8075e4edeab9d | 3.33kB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | latest             | 6e8672ddd037e | 197MB  |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300388 image ls --format table --alsologtostderr:
I0930 10:54:04.753891  611716 out.go:345] Setting OutFile to fd 1 ...
I0930 10:54:04.754081  611716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.754094  611716 out.go:358] Setting ErrFile to fd 2...
I0930 10:54:04.754100  611716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.754385  611716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
I0930 10:54:04.755044  611716 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.755200  611716 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.755716  611716 cli_runner.go:164] Run: docker container inspect functional-300388 --format={{.State.Status}}
I0930 10:54:04.775991  611716 ssh_runner.go:195] Run: systemctl --version
I0930 10:54:04.776048  611716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300388
I0930 10:54:04.792889  611716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38998 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/functional-300388/id_rsa Username:docker}
I0930 10:54:04.890171  611716 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300388 image ls --format json --alsologtostderr:
[{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-300388"],"size":"4788229"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},{"id":"8075e4edeab9dfc831d92770029c2015d5ec2301bceece65a5cda96a7b7a4c60","repoDigests":["localhost/minikube-local-cache-test@sha256:0efe6c4c845f5f1c9086e8fa69ab85591374abb9cc47a2701bdeb5a49a0a37d2"],"repoTags":["localhost/minikube-local-cache-test:functional-300388"],"size":"3328"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":["docker.io/library/nginx@sha256:1b1f09a6239162ae97b9d262db13572367bd4fa2c9d27adb75aface0223b9c09","docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172541"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300388 image ls --format json --alsologtostderr:
I0930 10:54:04.455695  611632 out.go:345] Setting OutFile to fd 1 ...
I0930 10:54:04.455937  611632 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.455951  611632 out.go:358] Setting ErrFile to fd 2...
I0930 10:54:04.455957  611632 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.456280  611632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
I0930 10:54:04.457227  611632 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.457364  611632 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.458191  611632 cli_runner.go:164] Run: docker container inspect functional-300388 --format={{.State.Status}}
I0930 10:54:04.482312  611632 ssh_runner.go:195] Run: systemctl --version
I0930 10:54:04.482374  611632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300388
I0930 10:54:04.500153  611632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38998 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/functional-300388/id_rsa Username:docker}
I0930 10:54:04.627570  611632 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300388 image ls --format yaml --alsologtostderr:
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: 6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests:
- docker.io/library/nginx@sha256:1b1f09a6239162ae97b9d262db13572367bd4fa2c9d27adb75aface0223b9c09
- docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
repoTags:
- docker.io/library/nginx:latest
size: "197172541"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-300388
size: "4788229"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 8075e4edeab9dfc831d92770029c2015d5ec2301bceece65a5cda96a7b7a4c60
repoDigests:
- localhost/minikube-local-cache-test@sha256:0efe6c4c845f5f1c9086e8fa69ab85591374abb9cc47a2701bdeb5a49a0a37d2
repoTags:
- localhost/minikube-local-cache-test:functional-300388
size: "3328"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300388 image ls --format yaml --alsologtostderr:
I0930 10:54:04.172340  611565 out.go:345] Setting OutFile to fd 1 ...
I0930 10:54:04.172501  611565 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.172509  611565 out.go:358] Setting ErrFile to fd 2...
I0930 10:54:04.172515  611565 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.172747  611565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
I0930 10:54:04.173333  611565 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.173447  611565 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.173975  611565 cli_runner.go:164] Run: docker container inspect functional-300388 --format={{.State.Status}}
I0930 10:54:04.193092  611565 ssh_runner.go:195] Run: systemctl --version
I0930 10:54:04.193149  611565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300388
I0930 10:54:04.213661  611565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38998 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/functional-300388/id_rsa Username:docker}
I0930 10:54:04.307046  611565 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300388 ssh pgrep buildkitd: exit status 1 (334.179749ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image build -t localhost/my-image:functional-300388 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 image build -t localhost/my-image:functional-300388 testdata/build --alsologtostderr: (3.172970035s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300388 image build -t localhost/my-image:functional-300388 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a10209502e0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-300388
--> d9bbcc81aa5
Successfully tagged localhost/my-image:functional-300388
d9bbcc81aa50616ba9dfa414d136b499655e6c215176c44777b53c46a616cb8d
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300388 image build -t localhost/my-image:functional-300388 testdata/build --alsologtostderr:
I0930 10:54:04.761909  611722 out.go:345] Setting OutFile to fd 1 ...
I0930 10:54:04.762887  611722 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.762903  611722 out.go:358] Setting ErrFile to fd 2...
I0930 10:54:04.762908  611722 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:54:04.763154  611722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
I0930 10:54:04.763824  611722 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.764345  611722 config.go:182] Loaded profile config "functional-300388": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 10:54:04.764894  611722 cli_runner.go:164] Run: docker container inspect functional-300388 --format={{.State.Status}}
I0930 10:54:04.786987  611722 ssh_runner.go:195] Run: systemctl --version
I0930 10:54:04.787045  611722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300388
I0930 10:54:04.806246  611722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38998 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/functional-300388/id_rsa Username:docker}
I0930 10:54:04.909364  611722 build_images.go:161] Building image from path: /tmp/build.3796966879.tar
I0930 10:54:04.909441  611722 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0930 10:54:04.923880  611722 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3796966879.tar
I0930 10:54:04.928218  611722 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3796966879.tar: stat -c "%s %y" /var/lib/minikube/build/build.3796966879.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3796966879.tar': No such file or directory
I0930 10:54:04.928293  611722 ssh_runner.go:362] scp /tmp/build.3796966879.tar --> /var/lib/minikube/build/build.3796966879.tar (3072 bytes)
I0930 10:54:04.968203  611722 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3796966879
I0930 10:54:04.977343  611722 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3796966879 -xf /var/lib/minikube/build/build.3796966879.tar
I0930 10:54:04.986281  611722 crio.go:315] Building image: /var/lib/minikube/build/build.3796966879
I0930 10:54:04.986366  611722 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-300388 /var/lib/minikube/build/build.3796966879 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0930 10:54:07.843570  611722 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-300388 /var/lib/minikube/build/build.3796966879 --cgroup-manager=cgroupfs: (2.8571748s)
I0930 10:54:07.843654  611722 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3796966879
I0930 10:54:07.852786  611722 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3796966879.tar
I0930 10:54:07.862363  611722 build_images.go:217] Built localhost/my-image:functional-300388 from /tmp/build.3796966879.tar
I0930 10:54:07.862393  611722 build_images.go:133] succeeded building to: functional-300388
I0930 10:54:07.862399  611722 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)
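The stderr trace above shows minikube's build staging: tar the context locally, probe the node with a `stat` existence check, copy the tar over, untar into a per-build directory, then run `podman build` there. The staging portion can be sketched locally without minikube or podman — all paths below are hypothetical stand-ins for the `/tmp/build.N.tar` and `/var/lib/minikube/build` locations in the log:

```shell
#!/bin/sh
# Sketch of the build staging seen in build_images.go (hypothetical paths,
# runs entirely locally): 1. tar the context, 2. stat-probe the destination,
# 3. copy + untar where podman build would then run.
set -eu

work=$(mktemp -d)
mkdir -p "$work/context" "$work/node/var/lib/minikube/build"
printf 'hello' > "$work/context/content.txt"

# 1. Package the build context (stands in for /tmp/build.N.tar)
tar -C "$work/context" -cf "$work/build.tar" .

# 2. Existence check: stat exits non-zero when the tar is not there yet,
#    exactly the "Process exited with status 1" path in the log
dest="$work/node/var/lib/minikube/build/build.tar"
if ! stat -c "%s %y" "$dest" >/dev/null 2>&1; then
    cp "$work/build.tar" "$dest"          # stands in for the scp step
fi

# 3. Unpack into the per-build directory podman build would consume
mkdir -p "${dest%.tar}.d"
tar -C "${dest%.tar}.d" -xf "$dest"
cat "${dest%.tar}.d/content.txt"          # -> hello
```

After the real build, minikube removes both the tar and the unpacked directory, matching the `rm -rf` / `rm -f` lines at the end of the trace.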

TestFunctional/parallel/ImageCommands/Setup (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-300388
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2756218942/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2756218942/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2756218942/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-300388 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2756218942/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2756218942/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300388 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2756218942/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)
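The test above mounts one host directory at three targets, verifies each with `findmnt -T`, then kills the mount daemons. A minimal sketch of the verification idiom (the `/mount1`..`/mount3` targets are hypothetical; on a real cluster they come from `minikube mount`): `findmnt -T` resolves the filesystem backing a path and exits non-zero when the path is unreachable, which is what makes it usable both before and after cleanup.

```shell
#!/bin/sh
# Sketch of the "verify then clean up" pattern from the mount test.
# findmnt -T PATH prints the filesystem that contains PATH.
for target in /mount1 /mount2 /mount3; do
    if findmnt -T "$target" >/dev/null 2>&1; then
        echo "$target is mounted"
    else
        echo "$target is not mounted (expected after cleanup)"
    fi
done

# Root is always backed by some filesystem, so this check always succeeds:
findmnt -T / >/dev/null && echo "findmnt works"
```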

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image load --daemon kicbase/echo-server:functional-300388 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-300388 image load --daemon kicbase/echo-server:functional-300388 --alsologtostderr: (1.446638962s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image load --daemon kicbase/echo-server:functional-300388 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.13s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-300388
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image load --daemon kicbase/echo-server:functional-300388 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image save kicbase/echo-server:functional-300388 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image rm kicbase/echo-server:functional-300388 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-300388
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-300388 image save --daemon kicbase/echo-server:functional-300388 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-300388
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-300388
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-300388
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-300388
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (170.21s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-367876 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0930 10:54:50.752628  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:55:18.456001  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-367876 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m49.370841856s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (170.21s)

TestMultiControlPlane/serial/DeployApp (11.09s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-367876 -- rollout status deployment/busybox: (8.155744818s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-g8tw8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-h4dw7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-zckdp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-g8tw8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-h4dw7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-zckdp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-g8tw8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-h4dw7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-zckdp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (11.09s)

TestMultiControlPlane/serial/PingHostFromPods (1.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-g8tw8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-g8tw8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-h4dw7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-h4dw7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-zckdp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-367876 -- exec busybox-7dff88458-zckdp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)

TestMultiControlPlane/serial/AddWorkerNode (61.17s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-367876 -v=7 --alsologtostderr
E0930 10:58:13.817249  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:58:13.823625  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:58:13.835072  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:58:13.856505  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:58:13.897917  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-367876 -v=7 --alsologtostderr: (1m0.235670188s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr
E0930 10:58:13.979465  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:58:14.140967  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:58:14.462853  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.17s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-367876 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0930 10:58:15.107799  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

TestMultiControlPlane/serial/CopyFile (18.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status --output json -v=7 --alsologtostderr
E0930 10:58:16.390015  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp testdata/cp-test.txt ha-367876:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2675418249/001/cp-test_ha-367876.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876:/home/docker/cp-test.txt ha-367876-m02:/home/docker/cp-test_ha-367876_ha-367876-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m02 "sudo cat /home/docker/cp-test_ha-367876_ha-367876-m02.txt"
E0930 10:58:18.952071  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876:/home/docker/cp-test.txt ha-367876-m03:/home/docker/cp-test_ha-367876_ha-367876-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m03 "sudo cat /home/docker/cp-test_ha-367876_ha-367876-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876:/home/docker/cp-test.txt ha-367876-m04:/home/docker/cp-test_ha-367876_ha-367876-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m04 "sudo cat /home/docker/cp-test_ha-367876_ha-367876-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp testdata/cp-test.txt ha-367876-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2675418249/001/cp-test_ha-367876-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m02:/home/docker/cp-test.txt ha-367876:/home/docker/cp-test_ha-367876-m02_ha-367876.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876 "sudo cat /home/docker/cp-test_ha-367876-m02_ha-367876.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m02:/home/docker/cp-test.txt ha-367876-m03:/home/docker/cp-test_ha-367876-m02_ha-367876-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m02 "sudo cat /home/docker/cp-test.txt"
E0930 10:58:24.073856  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m03 "sudo cat /home/docker/cp-test_ha-367876-m02_ha-367876-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m02:/home/docker/cp-test.txt ha-367876-m04:/home/docker/cp-test_ha-367876-m02_ha-367876-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m04 "sudo cat /home/docker/cp-test_ha-367876-m02_ha-367876-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp testdata/cp-test.txt ha-367876-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2675418249/001/cp-test_ha-367876-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m03:/home/docker/cp-test.txt ha-367876:/home/docker/cp-test_ha-367876-m03_ha-367876.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876 "sudo cat /home/docker/cp-test_ha-367876-m03_ha-367876.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m03:/home/docker/cp-test.txt ha-367876-m02:/home/docker/cp-test_ha-367876-m03_ha-367876-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m02 "sudo cat /home/docker/cp-test_ha-367876-m03_ha-367876-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m03:/home/docker/cp-test.txt ha-367876-m04:/home/docker/cp-test_ha-367876-m03_ha-367876-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m04 "sudo cat /home/docker/cp-test_ha-367876-m03_ha-367876-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp testdata/cp-test.txt ha-367876-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2675418249/001/cp-test_ha-367876-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m04:/home/docker/cp-test.txt ha-367876:/home/docker/cp-test_ha-367876-m04_ha-367876.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876 "sudo cat /home/docker/cp-test_ha-367876-m04_ha-367876.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m04:/home/docker/cp-test.txt ha-367876-m02:/home/docker/cp-test_ha-367876-m04_ha-367876-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m02 "sudo cat /home/docker/cp-test_ha-367876-m04_ha-367876-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 cp ha-367876-m04:/home/docker/cp-test.txt ha-367876-m03:/home/docker/cp-test_ha-367876-m04_ha-367876-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 ssh -n ha-367876-m03 "sudo cat /home/docker/cp-test_ha-367876-m04_ha-367876-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.03s)
TestMultiControlPlane/serial/StopSecondaryNode (12.7s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 node stop m02 -v=7 --alsologtostderr
E0930 10:58:34.315251  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-367876 node stop m02 -v=7 --alsologtostderr: (11.982848269s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr: exit status 7 (712.164658ms)
-- stdout --
	ha-367876
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-367876-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367876-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-367876-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0930 10:58:46.058877  627540 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:58:46.059042  627540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:58:46.059055  627540 out.go:358] Setting ErrFile to fd 2...
	I0930 10:58:46.059060  627540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:58:46.059312  627540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 10:58:46.059510  627540 out.go:352] Setting JSON to false
	I0930 10:58:46.059557  627540 mustload.go:65] Loading cluster: ha-367876
	I0930 10:58:46.059659  627540 notify.go:220] Checking for updates...
	I0930 10:58:46.060044  627540 config.go:182] Loaded profile config "ha-367876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:58:46.060059  627540 status.go:174] checking status of ha-367876 ...
	I0930 10:58:46.060597  627540 cli_runner.go:164] Run: docker container inspect ha-367876 --format={{.State.Status}}
	I0930 10:58:46.083744  627540 status.go:364] ha-367876 host status = "Running" (err=<nil>)
	I0930 10:58:46.083771  627540 host.go:66] Checking if "ha-367876" exists ...
	I0930 10:58:46.084088  627540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-367876
	I0930 10:58:46.108275  627540 host.go:66] Checking if "ha-367876" exists ...
	I0930 10:58:46.108591  627540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:58:46.108652  627540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-367876
	I0930 10:58:46.129237  627540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39003 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/ha-367876/id_rsa Username:docker}
	I0930 10:58:46.219174  627540 ssh_runner.go:195] Run: systemctl --version
	I0930 10:58:46.223672  627540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:58:46.235834  627540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:58:46.291380  627540 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-30 10:58:46.280512336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:58:46.292035  627540 kubeconfig.go:125] found "ha-367876" server: "https://192.168.49.254:8443"
	I0930 10:58:46.292072  627540 api_server.go:166] Checking apiserver status ...
	I0930 10:58:46.292116  627540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:58:46.303888  627540 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1403/cgroup
	I0930 10:58:46.313593  627540 api_server.go:182] apiserver freezer: "12:freezer:/docker/de6374bb38ca92b1c232dbecdfc6a2b099a51cff460a70205831d7c1ecacc794/crio/crio-1dbe11e4a359a3ab6dbb91a25bd1d3a4e58f0ace0df10038e7b7d7271aff2fb6"
	I0930 10:58:46.313672  627540 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/de6374bb38ca92b1c232dbecdfc6a2b099a51cff460a70205831d7c1ecacc794/crio/crio-1dbe11e4a359a3ab6dbb91a25bd1d3a4e58f0ace0df10038e7b7d7271aff2fb6/freezer.state
	I0930 10:58:46.322105  627540 api_server.go:204] freezer state: "THAWED"
	I0930 10:58:46.322135  627540 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0930 10:58:46.329891  627540 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0930 10:58:46.329919  627540 status.go:456] ha-367876 apiserver status = Running (err=<nil>)
	I0930 10:58:46.329929  627540 status.go:176] ha-367876 status: &{Name:ha-367876 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:58:46.329945  627540 status.go:174] checking status of ha-367876-m02 ...
	I0930 10:58:46.330257  627540 cli_runner.go:164] Run: docker container inspect ha-367876-m02 --format={{.State.Status}}
	I0930 10:58:46.347712  627540 status.go:364] ha-367876-m02 host status = "Stopped" (err=<nil>)
	I0930 10:58:46.347736  627540 status.go:377] host is not running, skipping remaining checks
	I0930 10:58:46.347743  627540 status.go:176] ha-367876-m02 status: &{Name:ha-367876-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:58:46.347769  627540 status.go:174] checking status of ha-367876-m03 ...
	I0930 10:58:46.348102  627540 cli_runner.go:164] Run: docker container inspect ha-367876-m03 --format={{.State.Status}}
	I0930 10:58:46.372619  627540 status.go:364] ha-367876-m03 host status = "Running" (err=<nil>)
	I0930 10:58:46.372643  627540 host.go:66] Checking if "ha-367876-m03" exists ...
	I0930 10:58:46.372941  627540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-367876-m03
	I0930 10:58:46.390539  627540 host.go:66] Checking if "ha-367876-m03" exists ...
	I0930 10:58:46.390871  627540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:58:46.390925  627540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-367876-m03
	I0930 10:58:46.407454  627540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39013 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/ha-367876-m03/id_rsa Username:docker}
	I0930 10:58:46.502465  627540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:58:46.515304  627540 kubeconfig.go:125] found "ha-367876" server: "https://192.168.49.254:8443"
	I0930 10:58:46.515359  627540 api_server.go:166] Checking apiserver status ...
	I0930 10:58:46.515404  627540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:58:46.526435  627540 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup
	I0930 10:58:46.543897  627540 api_server.go:182] apiserver freezer: "12:freezer:/docker/06cc58b65098b2ec85d33f81b677c473569cf921e9d8707c8c8e3231afa250cd/crio/crio-fb2044cbe633e29492221763ee86cf62101b870ff4e4f787b86370fab5435dad"
	I0930 10:58:46.543987  627540 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/06cc58b65098b2ec85d33f81b677c473569cf921e9d8707c8c8e3231afa250cd/crio/crio-fb2044cbe633e29492221763ee86cf62101b870ff4e4f787b86370fab5435dad/freezer.state
	I0930 10:58:46.555523  627540 api_server.go:204] freezer state: "THAWED"
	I0930 10:58:46.555594  627540 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0930 10:58:46.565345  627540 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0930 10:58:46.565372  627540 status.go:456] ha-367876-m03 apiserver status = Running (err=<nil>)
	I0930 10:58:46.565382  627540 status.go:176] ha-367876-m03 status: &{Name:ha-367876-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:58:46.565406  627540 status.go:174] checking status of ha-367876-m04 ...
	I0930 10:58:46.565830  627540 cli_runner.go:164] Run: docker container inspect ha-367876-m04 --format={{.State.Status}}
	I0930 10:58:46.583898  627540 status.go:364] ha-367876-m04 host status = "Running" (err=<nil>)
	I0930 10:58:46.583927  627540 host.go:66] Checking if "ha-367876-m04" exists ...
	I0930 10:58:46.584283  627540 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-367876-m04
	I0930 10:58:46.601196  627540 host.go:66] Checking if "ha-367876-m04" exists ...
	I0930 10:58:46.601524  627540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:58:46.601629  627540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-367876-m04
	I0930 10:58:46.618152  627540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39018 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/ha-367876-m04/id_rsa Username:docker}
	I0930 10:58:46.706580  627540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:58:46.718922  627540 status.go:176] ha-367876-m04 status: &{Name:ha-367876-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.70s)
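The status probe in the stderr log above locates the apiserver container's cgroup by grepping `/proc/<pid>/cgroup` for the v1 freezer controller, then reads `freezer.state` under `/sys/fs/cgroup/freezer/<path>` and expects `THAWED`. A minimal sketch of the parsing step, run against a sample line rather than a live process (the container path below is made up for illustration; on cgroup v2 hosts no freezer controller line exists):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// freezerPath extracts the freezer-controller cgroup path from one line of
// /proc/<pid>/cgroup, mirroring `sudo egrep ^[0-9]+:freezer:` in the log.
// It reports false when the line does not belong to the freezer controller
// (for example the single `0::/...` line of a cgroup v2 host).
func freezerPath(line string) (string, bool) {
	re := regexp.MustCompile(`^[0-9]+:freezer:(.+)$`)
	m := re.FindStringSubmatch(strings.TrimSpace(line))
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	// Hypothetical line shaped like the apiserver entry in the log.
	sample := "12:freezer:/docker/abc123/crio/crio-def456"
	if p, ok := freezerPath(sample); ok {
		// The health check would then read this file and compare to "THAWED".
		fmt.Println("/sys/fs/cgroup/freezer" + p + "/freezer.state")
	}
}
```

A frozen (paused) container would report `FROZEN` instead, which is why the check treats anything but `THAWED` as unhealthy.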
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)
TestMultiControlPlane/serial/RestartSecondaryNode (22.01s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 node start m02 -v=7 --alsologtostderr
E0930 10:58:54.797138  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-367876 node start m02 -v=7 --alsologtostderr: (20.35631789s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr: (1.497539664s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.01s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.4s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.400601057s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.40s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (237.36s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-367876 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-367876 -v=7 --alsologtostderr
E0930 10:59:35.758916  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-367876 -v=7 --alsologtostderr: (36.995126194s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-367876 --wait=true -v=7 --alsologtostderr
E0930 10:59:50.751718  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:00:57.681008  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-367876 --wait=true -v=7 --alsologtostderr: (3m20.17256995s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-367876
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (237.36s)
TestMultiControlPlane/serial/DeleteSecondaryNode (12.5s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 node delete m03 -v=7 --alsologtostderr
E0930 11:03:13.817511  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-367876 node delete m03 -v=7 --alsologtostderr: (11.564938938s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.50s)
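The `kubectl get nodes -o go-template` call above renders each node's `Ready` condition with a go-template. The same template can be exercised locally against a reduced node-list document (the two-node JSON below is a made-up stand-in for real `kubectl get nodes -o json` output, trimmed to the fields the template touches):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// The template the test passes to kubectl: for every node, print the
// status of its "Ready" condition on its own line.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// readyStatuses renders readyTmpl against a node-list JSON document.
func readyStatuses(nodeListJSON string) (string, error) {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodeListJSON), &nodes); err != nil {
		return "", err
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, nodes); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Hypothetical two-node cluster; only the Ready condition is printed.
	doc := `{"items":[
	  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},
	                           {"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`
	out, err := readyStatuses(doc)
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // one " True" line per Ready node
}
```

The test then only has to count the rendered `True` lines to confirm every remaining node is Ready after the deletion.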
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)
TestMultiControlPlane/serial/StopCluster (35.79s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 stop -v=7 --alsologtostderr
E0930 11:03:41.522339  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-367876 stop -v=7 --alsologtostderr: (35.680712606s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr: exit status 7 (109.925974ms)
-- stdout --
	ha-367876
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367876-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367876-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0930 11:03:57.187505  642140 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:03:57.187733  642140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:03:57.187765  642140 out.go:358] Setting ErrFile to fd 2...
	I0930 11:03:57.187785  642140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:03:57.188039  642140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 11:03:57.188252  642140 out.go:352] Setting JSON to false
	I0930 11:03:57.188313  642140 mustload.go:65] Loading cluster: ha-367876
	I0930 11:03:57.188403  642140 notify.go:220] Checking for updates...
	I0930 11:03:57.188790  642140 config.go:182] Loaded profile config "ha-367876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:03:57.188833  642140 status.go:174] checking status of ha-367876 ...
	I0930 11:03:57.189809  642140 cli_runner.go:164] Run: docker container inspect ha-367876 --format={{.State.Status}}
	I0930 11:03:57.208069  642140 status.go:364] ha-367876 host status = "Stopped" (err=<nil>)
	I0930 11:03:57.208088  642140 status.go:377] host is not running, skipping remaining checks
	I0930 11:03:57.208095  642140 status.go:176] ha-367876 status: &{Name:ha-367876 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:03:57.208118  642140 status.go:174] checking status of ha-367876-m02 ...
	I0930 11:03:57.208429  642140 cli_runner.go:164] Run: docker container inspect ha-367876-m02 --format={{.State.Status}}
	I0930 11:03:57.227938  642140 status.go:364] ha-367876-m02 host status = "Stopped" (err=<nil>)
	I0930 11:03:57.227957  642140 status.go:377] host is not running, skipping remaining checks
	I0930 11:03:57.227963  642140 status.go:176] ha-367876-m02 status: &{Name:ha-367876-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:03:57.227983  642140 status.go:174] checking status of ha-367876-m04 ...
	I0930 11:03:57.228275  642140 cli_runner.go:164] Run: docker container inspect ha-367876-m04 --format={{.State.Status}}
	I0930 11:03:57.246313  642140 status.go:364] ha-367876-m04 host status = "Stopped" (err=<nil>)
	I0930 11:03:57.246334  642140 status.go:377] host is not running, skipping remaining checks
	I0930 11:03:57.246341  642140 status.go:176] ha-367876-m04 status: &{Name:ha-367876-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.79s)
TestMultiControlPlane/serial/RestartCluster (94.68s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-367876 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0930 11:04:50.751712  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-367876 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m33.765246919s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (94.68s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)
TestMultiControlPlane/serial/AddSecondaryNode (69.62s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-367876 --control-plane -v=7 --alsologtostderr
E0930 11:06:13.817429  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-367876 --control-plane -v=7 --alsologtostderr: (1m8.679654754s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-367876 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.62s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)
TestJSONOutput/start/Command (76.27s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-855813 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-855813 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m16.270325588s)
--- PASS: TestJSONOutput/start/Command (76.27s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.74s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-855813 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.65s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-855813 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-855813 --output=json --user=testUser
E0930 11:08:13.818451  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-855813 --output=json --user=testUser: (5.936573935s)
--- PASS: TestJSONOutput/stop/Command (5.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-610603 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-610603 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.531425ms)
-- stdout --
	{"specversion":"1.0","id":"5a774083-d914-4c94-b1aa-a370453553d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-610603] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e99a36cd-d0c9-4f41-b2c8-b6df354d6b53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"7ef29123-4061-4eed-88e7-65a2fe0cf04d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d99a9b39-37c5-4304-a23b-003ecf7644ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig"}}
	{"specversion":"1.0","id":"214d50b1-fb3c-4bff-bfc3-791952882371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube"}}
	{"specversion":"1.0","id":"703fb3d0-8cf6-484c-a359-136ad620726a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d1570eda-a527-48b7-92c8-fa8772ccb293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"46804825-8d87-499a-bd0c-0fc1cd12772e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-610603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-610603
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (38.45s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-358822 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-358822 --network=: (36.315737446s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-358822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-358822
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-358822: (2.115069373s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.45s)

TestKicCustomNetwork/use_default_bridge_network (35.61s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-944483 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-944483 --network=bridge: (33.610606162s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-944483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-944483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-944483: (1.982949645s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.61s)

TestKicExistingNetwork (32.75s)

=== RUN   TestKicExistingNetwork
I0930 11:09:33.214840  575428 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0930 11:09:33.230175  575428 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0930 11:09:33.231890  575428 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0930 11:09:33.231934  575428 cli_runner.go:164] Run: docker network inspect existing-network
W0930 11:09:33.247507  575428 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0930 11:09:33.247542  575428 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0930 11:09:33.247561  575428 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0930 11:09:33.247661  575428 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0930 11:09:33.265523  575428 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e5faabc62538 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:26:3e:3e:9b} reservation:<nil>}
I0930 11:09:33.266490  575428 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a19b30}
I0930 11:09:33.266528  575428 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0930 11:09:33.266581  575428 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0930 11:09:33.339477  575428 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-502081 --network=existing-network
E0930 11:09:50.750671  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-502081 --network=existing-network: (30.673509064s)
helpers_test.go:175: Cleaning up "existing-network-502081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-502081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-502081: (1.924138166s)
I0930 11:10:05.952379  575428 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.75s)

TestKicCustomSubnet (32.93s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-567370 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-567370 --subnet=192.168.60.0/24: (30.802060351s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-567370 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-567370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-567370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-567370: (2.11201702s)
--- PASS: TestKicCustomSubnet (32.93s)

TestKicStaticIP (34.5s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-030534 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-030534 --static-ip=192.168.200.200: (32.288269543s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-030534 ip
helpers_test.go:175: Cleaning up "static-ip-030534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-030534
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-030534: (2.054851478s)
--- PASS: TestKicStaticIP (34.50s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (64.67s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-397207 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-397207 --driver=docker  --container-runtime=crio: (27.621190809s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-399742 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-399742 --driver=docker  --container-runtime=crio: (31.747544638s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-397207
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-399742
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-399742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-399742
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-399742: (1.978147876s)
helpers_test.go:175: Cleaning up "first-397207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-397207
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-397207: (1.954306945s)
--- PASS: TestMinikubeProfile (64.67s)

TestMountStart/serial/StartWithMountFirst (6.72s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-437102 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-437102 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.715005995s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.72s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-437102 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (9.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-438964 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-438964 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.282626889s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.28s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-438964 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-437102 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-437102 --alsologtostderr -v=5: (1.610930674s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-438964 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-438964
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-438964: (1.211192852s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.21s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-438964
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-438964: (7.209013195s)
--- PASS: TestMountStart/serial/RestartStopped (8.21s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-438964 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (107.04s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-860188 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0930 11:13:13.816964  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-860188 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m46.560454016s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.04s)

TestMultiNode/serial/DeployApp2Nodes (7.6s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- rollout status deployment/busybox
E0930 11:14:36.884165  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-860188 -- rollout status deployment/busybox: (5.660301588s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-bql78 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-lzgdt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-bql78 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-lzgdt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-bql78 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-lzgdt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.60s)

TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-bql78 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-bql78 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-lzgdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-860188 -- exec busybox-7dff88458-lzgdt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)

TestMultiNode/serial/AddNode (28.25s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-860188 -v 3 --alsologtostderr
E0930 11:14:50.751687  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-860188 -v 3 --alsologtostderr: (27.599372697s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.25s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-860188 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.71s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp testdata/cp-test.txt multinode-860188:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3007576177/001/cp-test_multinode-860188.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188:/home/docker/cp-test.txt multinode-860188-m02:/home/docker/cp-test_multinode-860188_multinode-860188-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m02 "sudo cat /home/docker/cp-test_multinode-860188_multinode-860188-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188:/home/docker/cp-test.txt multinode-860188-m03:/home/docker/cp-test_multinode-860188_multinode-860188-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m03 "sudo cat /home/docker/cp-test_multinode-860188_multinode-860188-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp testdata/cp-test.txt multinode-860188-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3007576177/001/cp-test_multinode-860188-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188-m02:/home/docker/cp-test.txt multinode-860188:/home/docker/cp-test_multinode-860188-m02_multinode-860188.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188 "sudo cat /home/docker/cp-test_multinode-860188-m02_multinode-860188.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188-m02:/home/docker/cp-test.txt multinode-860188-m03:/home/docker/cp-test_multinode-860188-m02_multinode-860188-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m03 "sudo cat /home/docker/cp-test_multinode-860188-m02_multinode-860188-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp testdata/cp-test.txt multinode-860188-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3007576177/001/cp-test_multinode-860188-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188-m03:/home/docker/cp-test.txt multinode-860188:/home/docker/cp-test_multinode-860188-m03_multinode-860188.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188 "sudo cat /home/docker/cp-test_multinode-860188-m03_multinode-860188.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 cp multinode-860188-m03:/home/docker/cp-test.txt multinode-860188-m02:/home/docker/cp-test_multinode-860188-m03_multinode-860188-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 ssh -n multinode-860188-m02 "sudo cat /home/docker/cp-test_multinode-860188-m03_multinode-860188-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.71s)
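The CopyFile steps above repeat one pattern: copy a file to a node with `minikube cp`, then `ssh`-`cat` it back and compare. A minimal local sketch of that round trip (plain `cp`/`cmp` stand in for the minikube transfer; all paths are hypothetical):

```shell
#!/bin/sh
# Round-trip copy check, mirroring `minikube cp` followed by
# `ssh -n <node> "sudo cat ..."`. cp/cmp stand in for the node
# transfer; the temp paths are illustrative only.
set -e
src=$(mktemp)                       # plays the role of testdata/cp-test.txt
dst=$(mktemp -d)                    # plays the role of a node's /home/docker
printf 'Test file for minikube cp\n' > "$src"
cp "$src" "$dst/cp-test.txt"        # the `minikube cp` step
cmp -s "$src" "$dst/cp-test.txt"    # the `sudo cat` comparison step
echo "contents match"
```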

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-860188 node stop m03: (1.209912892s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-860188 status: exit status 7 (485.821068ms)

-- stdout --
	multinode-860188
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-860188-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-860188-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-860188 status --alsologtostderr: exit status 7 (509.163274ms)

-- stdout --
	multinode-860188
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-860188-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-860188-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0930 11:15:24.002964  695172 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:15:24.003142  695172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:15:24.003154  695172 out.go:358] Setting ErrFile to fd 2...
	I0930 11:15:24.003159  695172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:15:24.003434  695172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 11:15:24.003644  695172 out.go:352] Setting JSON to false
	I0930 11:15:24.003677  695172 mustload.go:65] Loading cluster: multinode-860188
	I0930 11:15:24.003844  695172 notify.go:220] Checking for updates...
	I0930 11:15:24.004184  695172 config.go:182] Loaded profile config "multinode-860188": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:15:24.004199  695172 status.go:174] checking status of multinode-860188 ...
	I0930 11:15:24.004779  695172 cli_runner.go:164] Run: docker container inspect multinode-860188 --format={{.State.Status}}
	I0930 11:15:24.024132  695172 status.go:364] multinode-860188 host status = "Running" (err=<nil>)
	I0930 11:15:24.024156  695172 host.go:66] Checking if "multinode-860188" exists ...
	I0930 11:15:24.024495  695172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860188
	I0930 11:15:24.053910  695172 host.go:66] Checking if "multinode-860188" exists ...
	I0930 11:15:24.054222  695172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 11:15:24.054273  695172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860188
	I0930 11:15:24.073796  695172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39123 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/multinode-860188/id_rsa Username:docker}
	I0930 11:15:24.166856  695172 ssh_runner.go:195] Run: systemctl --version
	I0930 11:15:24.171124  695172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:15:24.182560  695172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 11:15:24.241377  695172 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-30 11:15:24.231799695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 11:15:24.242061  695172 kubeconfig.go:125] found "multinode-860188" server: "https://192.168.67.2:8443"
	I0930 11:15:24.242097  695172 api_server.go:166] Checking apiserver status ...
	I0930 11:15:24.242141  695172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:15:24.252895  695172 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup
	I0930 11:15:24.262003  695172 api_server.go:182] apiserver freezer: "12:freezer:/docker/7235572fa24566846aaf784494be866778e8ac721e40caba7dde95c433fce1d8/crio/crio-85d5b5b62c44f350028de93e3c9a6ad9cae2467bfc1a71666fab0c4e1883e24a"
	I0930 11:15:24.262074  695172 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7235572fa24566846aaf784494be866778e8ac721e40caba7dde95c433fce1d8/crio/crio-85d5b5b62c44f350028de93e3c9a6ad9cae2467bfc1a71666fab0c4e1883e24a/freezer.state
	I0930 11:15:24.270912  695172 api_server.go:204] freezer state: "THAWED"
	I0930 11:15:24.270943  695172 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0930 11:15:24.278458  695172 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0930 11:15:24.278487  695172 status.go:456] multinode-860188 apiserver status = Running (err=<nil>)
	I0930 11:15:24.278497  695172 status.go:176] multinode-860188 status: &{Name:multinode-860188 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:15:24.278513  695172 status.go:174] checking status of multinode-860188-m02 ...
	I0930 11:15:24.278831  695172 cli_runner.go:164] Run: docker container inspect multinode-860188-m02 --format={{.State.Status}}
	I0930 11:15:24.295094  695172 status.go:364] multinode-860188-m02 host status = "Running" (err=<nil>)
	I0930 11:15:24.295122  695172 host.go:66] Checking if "multinode-860188-m02" exists ...
	I0930 11:15:24.295433  695172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-860188-m02
	I0930 11:15:24.311805  695172 host.go:66] Checking if "multinode-860188-m02" exists ...
	I0930 11:15:24.312125  695172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 11:15:24.312169  695172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-860188-m02
	I0930 11:15:24.330248  695172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:39128 SSHKeyPath:/home/jenkins/minikube-integration/19734-570035/.minikube/machines/multinode-860188-m02/id_rsa Username:docker}
	I0930 11:15:24.419041  695172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:15:24.437469  695172 status.go:176] multinode-860188-m02 status: &{Name:multinode-860188-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:15:24.437515  695172 status.go:174] checking status of multinode-860188-m03 ...
	I0930 11:15:24.437903  695172 cli_runner.go:164] Run: docker container inspect multinode-860188-m03 --format={{.State.Status}}
	I0930 11:15:24.455749  695172 status.go:364] multinode-860188-m03 host status = "Stopped" (err=<nil>)
	I0930 11:15:24.455771  695172 status.go:377] host is not running, skipping remaining checks
	I0930 11:15:24.455799  695172 status.go:176] multinode-860188-m03 status: &{Name:multinode-860188-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)

TestMultiNode/serial/StartAfterStop (9.71s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-860188 node start m03 -v=7 --alsologtostderr: (8.983787643s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.71s)

TestMultiNode/serial/RestartKeepsNodes (105.66s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-860188
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-860188
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-860188: (24.820130611s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-860188 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-860188 --wait=true -v=8 --alsologtostderr: (1m20.725773087s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-860188
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.66s)

TestMultiNode/serial/DeleteNode (5.54s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-860188 node delete m03: (4.887108155s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.54s)

TestMultiNode/serial/StopMultiNode (23.88s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-860188 stop: (23.687127889s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-860188 status: exit status 7 (95.81028ms)

-- stdout --
	multinode-860188
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-860188-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-860188 status --alsologtostderr: exit status 7 (100.401711ms)

-- stdout --
	multinode-860188
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-860188-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0930 11:17:49.206579  702952 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:17:49.206706  702952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:17:49.206717  702952 out.go:358] Setting ErrFile to fd 2...
	I0930 11:17:49.206723  702952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:17:49.206973  702952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 11:17:49.207160  702952 out.go:352] Setting JSON to false
	I0930 11:17:49.207195  702952 mustload.go:65] Loading cluster: multinode-860188
	I0930 11:17:49.207302  702952 notify.go:220] Checking for updates...
	I0930 11:17:49.207657  702952 config.go:182] Loaded profile config "multinode-860188": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:17:49.207678  702952 status.go:174] checking status of multinode-860188 ...
	I0930 11:17:49.208574  702952 cli_runner.go:164] Run: docker container inspect multinode-860188 --format={{.State.Status}}
	I0930 11:17:49.226269  702952 status.go:364] multinode-860188 host status = "Stopped" (err=<nil>)
	I0930 11:17:49.226293  702952 status.go:377] host is not running, skipping remaining checks
	I0930 11:17:49.226300  702952 status.go:176] multinode-860188 status: &{Name:multinode-860188 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:17:49.226336  702952 status.go:174] checking status of multinode-860188-m02 ...
	I0930 11:17:49.226670  702952 cli_runner.go:164] Run: docker container inspect multinode-860188-m02 --format={{.State.Status}}
	I0930 11:17:49.257024  702952 status.go:364] multinode-860188-m02 host status = "Stopped" (err=<nil>)
	I0930 11:17:49.257053  702952 status.go:377] host is not running, skipping remaining checks
	I0930 11:17:49.257060  702952 status.go:176] multinode-860188-m02 status: &{Name:multinode-860188-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)

TestMultiNode/serial/RestartMultiNode (47.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-860188 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0930 11:18:13.817249  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-860188 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (46.403176315s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-860188 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.05s)

TestMultiNode/serial/ValidateNameConflict (33.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-860188
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-860188-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-860188-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.891346ms)

-- stdout --
	* [multinode-860188-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-860188-m02' is duplicated with machine name 'multinode-860188-m02' in profile 'multinode-860188'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-860188-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-860188-m03 --driver=docker  --container-runtime=crio: (31.048713007s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-860188
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-860188: exit status 80 (338.597619ms)

-- stdout --
	* Adding node m03 to cluster multinode-860188 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-860188-m03 already exists in multinode-860188-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-860188-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-860188-m03: (1.928078988s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.47s)

TestPreload (126.52s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-626741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0930 11:19:50.751712  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-626741 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.939742126s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-626741 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-626741 image pull gcr.io/k8s-minikube/busybox: (3.016134257s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-626741
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-626741: (5.815803054s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-626741 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-626741 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.100722301s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-626741 image list
helpers_test.go:175: Cleaning up "test-preload-626741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-626741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-626741: (2.377161651s)
--- PASS: TestPreload (126.52s)

TestScheduledStopUnix (105.59s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-461562 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-461562 --memory=2048 --driver=docker  --container-runtime=crio: (29.097653001s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-461562 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-461562 -n scheduled-stop-461562
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-461562 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0930 11:21:49.820843  575428 retry.go:31] will retry after 142.234µs: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.824684  575428 retry.go:31] will retry after 205.011µs: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.825819  575428 retry.go:31] will retry after 219.705µs: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.826949  575428 retry.go:31] will retry after 291.07µs: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.828072  575428 retry.go:31] will retry after 700.961µs: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.829189  575428 retry.go:31] will retry after 591.205µs: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.830310  575428 retry.go:31] will retry after 615.558µs: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.831377  575428 retry.go:31] will retry after 2.000117ms: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.833567  575428 retry.go:31] will retry after 2.907786ms: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.836773  575428 retry.go:31] will retry after 2.711924ms: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.842022  575428 retry.go:31] will retry after 7.919045ms: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.850273  575428 retry.go:31] will retry after 4.564851ms: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.855501  575428 retry.go:31] will retry after 18.071158ms: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.873679  575428 retry.go:31] will retry after 16.188671ms: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
I0930 11:21:49.890960  575428 retry.go:31] will retry after 33.804513ms: open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/scheduled-stop-461562/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-461562 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-461562 -n scheduled-stop-461562
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-461562
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-461562 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0930 11:22:53.818754  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-461562
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-461562: exit status 7 (72.177566ms)

-- stdout --
	scheduled-stop-461562
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-461562 -n scheduled-stop-461562
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-461562 -n scheduled-stop-461562: exit status 7 (72.856189ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-461562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-461562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-461562: (4.966118723s)
--- PASS: TestScheduledStopUnix (105.59s)

TestInsufficientStorage (10.18s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-111724 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-111724 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.738311294s)

-- stdout --
	{"specversion":"1.0","id":"a4b2bfb3-b6af-4e1d-b44c-baa99f101f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-111724] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"038dc547-aae1-4298-b5ea-aab55c712087","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"6a7df6fe-2447-43ac-9e12-cf807e72dc0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bd523086-bc99-421a-ae81-d2fb2f99f97c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig"}}
	{"specversion":"1.0","id":"95cb2250-477f-41bf-a67f-b0d2f4607cdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube"}}
	{"specversion":"1.0","id":"6a2d59ec-9750-443a-bf44-9ee895f1b12a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4ff240a3-d7e6-41f2-bbf8-91e4b68dd095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aef97f49-3988-4ecd-9913-8fa8ab416f90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"66df3bd2-6426-4d21-97ff-24b5eff79589","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"813b4d16-36f9-4d0f-ac6e-5d98a6901414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"35fddd0c-390b-4058-a3de-961cae26340c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"89fb4ee0-c12f-4bb5-b726-267fa92810de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-111724\" primary control-plane node in \"insufficient-storage-111724\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea409bd6-cbae-4ba3-a55e-a5a5c4a54a7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d8b91ea-62cb-4217-bafa-c160864afdd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f721089-d339-4ce3-8a15-604f16894f1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-111724 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-111724 --output=json --layout=cluster: exit status 7 (268.045473ms)

-- stdout --
	{"Name":"insufficient-storage-111724","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-111724","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0930 11:23:13.796006  720336 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-111724" does not appear in /home/jenkins/minikube-integration/19734-570035/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-111724 --output=json --layout=cluster
E0930 11:23:13.817823  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-111724 --output=json --layout=cluster: exit status 7 (285.153393ms)

-- stdout --
	{"Name":"insufficient-storage-111724","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-111724","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0930 11:23:14.082206  720396 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-111724" does not appear in /home/jenkins/minikube-integration/19734-570035/kubeconfig
	E0930 11:23:14.092244  720396 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/insufficient-storage-111724/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-111724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-111724
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-111724: (1.882856676s)
--- PASS: TestInsufficientStorage (10.18s)

TestRunningBinaryUpgrade (67.62s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4227122282 start -p running-upgrade-531189 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4227122282 start -p running-upgrade-531189 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.629494006s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-531189 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0930 11:28:13.817210  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-531189 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.897959085s)
helpers_test.go:175: Cleaning up "running-upgrade-531189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-531189
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-531189: (3.468709303s)
--- PASS: TestRunningBinaryUpgrade (67.62s)

TestKubernetesUpgrade (395.92s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-287551 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-287551 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m10.986470911s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-287551
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-287551: (3.371645166s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-287551 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-287551 status --format={{.Host}}: exit status 7 (95.585734ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-287551 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-287551 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.500367818s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-287551 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-287551 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-287551 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (98.355753ms)

-- stdout --
	* [kubernetes-upgrade-287551] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-287551
	    minikube start -p kubernetes-upgrade-287551 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2875512 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-287551 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-287551 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-287551 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.415370211s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-287551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-287551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-287551: (2.277454974s)
--- PASS: TestKubernetesUpgrade (395.92s)

TestMissingContainerUpgrade (166.34s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2331253645 start -p missing-upgrade-363521 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2331253645 start -p missing-upgrade-363521 --memory=2200 --driver=docker  --container-runtime=crio: (1m28.938061398s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-363521
E0930 11:24:50.750678  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-363521: (10.444594582s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-363521
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-363521 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-363521 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.19340365s)
helpers_test.go:175: Cleaning up "missing-upgrade-363521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-363521
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-363521: (2.025094375s)
--- PASS: TestMissingContainerUpgrade (166.34s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-424902 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-424902 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (91.823373ms)

-- stdout --
	* [NoKubernetes-424902] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (39.77s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-424902 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-424902 --driver=docker  --container-runtime=crio: (39.414631611s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-424902 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.77s)

TestNoKubernetes/serial/StartWithStopK8s (7.98s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-424902 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-424902 --no-kubernetes --driver=docker  --container-runtime=crio: (5.410542137s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-424902 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-424902 status -o json: exit status 2 (411.470863ms)

-- stdout --
	{"Name":"NoKubernetes-424902","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-424902
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-424902: (2.157982421s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.98s)

TestNoKubernetes/serial/Start (7.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-424902 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-424902 --no-kubernetes --driver=docker  --container-runtime=crio: (7.273685189s)
--- PASS: TestNoKubernetes/serial/Start (7.27s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-424902 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-424902 "sudo systemctl is-active --quiet service kubelet": exit status 1 (302.495379ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (1.18s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.18s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-424902
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-424902: (1.255449307s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (7.15s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-424902 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-424902 --driver=docker  --container-runtime=crio: (7.144966584s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.15s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-424902 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-424902 "sudo systemctl is-active --quiet service kubelet": exit status 1 (338.187236ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestStoppedBinaryUpgrade/Setup (0.61s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.61s)

TestStoppedBinaryUpgrade/Upgrade (76.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2574284287 start -p stopped-upgrade-391697 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2574284287 start -p stopped-upgrade-391697 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.839649437s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2574284287 -p stopped-upgrade-391697 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2574284287 -p stopped-upgrade-391697 stop: (2.76227086s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-391697 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-391697 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.953293338s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (76.56s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-391697
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-391697: (1.168378375s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

TestPause/serial/Start (49.6s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-662856 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-662856 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.599999843s)
--- PASS: TestPause/serial/Start (49.60s)

TestPause/serial/SecondStartNoReconfiguration (28.33s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-662856 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-662856 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.314379239s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.33s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-662856 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-662856 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-662856 --output=json --layout=cluster: exit status 2 (305.023734ms)

-- stdout --
	{"Name":"pause-662856","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-662856","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
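The `--output=json --layout=cluster` payload above encodes component health as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), which is why `status` exits non-zero for a paused cluster. A minimal Python sketch, using an abbreviated copy of the JSON printed above (the `Step`/`StepDetail`/`BinaryVersion` fields are dropped for brevity), shows how such a payload can be decoded:

```python
import json

# Trimmed copy of the `minikube status --output=json --layout=cluster` output above.
# 418 is minikube's "Paused" code, 405 "Stopped", 200 "OK".
status_json = '''{"Name":"pause-662856","StatusCode":418,"StatusName":"Paused",
"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},
"Nodes":[{"Name":"pause-662856","StatusCode":200,"StatusName":"OK",
"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(status_json)
assert status["StatusName"] == "Paused"

# Report per-node component state.
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(f'{node["Name"]}/{name}: {comp["StatusName"]}')
# prints:
#   pause-662856/apiserver: Paused
#   pause-662856/kubelet: Stopped
```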

                                                
                                    
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-662856 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
TestPause/serial/PauseAgain (1.1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-662856 --alsologtostderr -v=5
E0930 11:29:50.750813  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-662856 --alsologtostderr -v=5: (1.103319324s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

                                                
                                    
TestPause/serial/DeletePaused (2.67s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-662856 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-662856 --alsologtostderr -v=5: (2.674300714s)
--- PASS: TestPause/serial/DeletePaused (2.67s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-662856
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-662856: exit status 1 (17.797763ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-662856: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                    
TestNetworkPlugins/group/false (4.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-513160 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-513160 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (175.495414ms)

                                                
                                                
-- stdout --
	* [false-513160] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:30:39.345719  760035 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:30:39.345932  760035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:30:39.345947  760035 out.go:358] Setting ErrFile to fd 2...
	I0930 11:30:39.345953  760035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:30:39.346229  760035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-570035/.minikube/bin
	I0930 11:30:39.346703  760035 out.go:352] Setting JSON to false
	I0930 11:30:39.347800  760035 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":126786,"bootTime":1727569054,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0930 11:30:39.347880  760035 start.go:139] virtualization:  
	I0930 11:30:39.351239  760035 out.go:177] * [false-513160] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 11:30:39.354835  760035 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:30:39.355013  760035 notify.go:220] Checking for updates...
	I0930 11:30:39.360350  760035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:30:39.362954  760035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-570035/kubeconfig
	I0930 11:30:39.365488  760035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-570035/.minikube
	I0930 11:30:39.368167  760035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 11:30:39.370756  760035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:30:39.373979  760035 config.go:182] Loaded profile config "kubernetes-upgrade-287551": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:39.374136  760035 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:30:39.407424  760035 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 11:30:39.407573  760035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 11:30:39.456661  760035 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 11:30:39.446261604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 11:30:39.456777  760035 docker.go:318] overlay module found
	I0930 11:30:39.459591  760035 out.go:177] * Using the docker driver based on user configuration
	I0930 11:30:39.462187  760035 start.go:297] selected driver: docker
	I0930 11:30:39.462203  760035 start.go:901] validating driver "docker" against <nil>
	I0930 11:30:39.462217  760035 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:30:39.465457  760035 out.go:201] 
	W0930 11:30:39.468186  760035 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0930 11:30:39.470880  760035 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-513160 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-513160

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-513160

>>> host: /etc/nsswitch.conf:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /etc/hosts:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /etc/resolv.conf:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-513160

>>> host: crictl pods:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: crictl containers:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> k8s: describe netcat deployment:
error: context "false-513160" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-513160" does not exist

>>> k8s: netcat logs:
error: context "false-513160" does not exist

>>> k8s: describe coredns deployment:
error: context "false-513160" does not exist

>>> k8s: describe coredns pods:
error: context "false-513160" does not exist

>>> k8s: coredns logs:
error: context "false-513160" does not exist

>>> k8s: describe api server pod(s):
error: context "false-513160" does not exist

>>> k8s: api server logs:
error: context "false-513160" does not exist

>>> host: /etc/cni:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: ip a s:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: ip r s:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: iptables-save:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: iptables table nat:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> k8s: describe kube-proxy daemon set:
error: context "false-513160" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-513160" does not exist

>>> k8s: kube-proxy logs:
error: context "false-513160" does not exist

>>> host: kubelet daemon status:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: kubelet daemon config:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> k8s: kubelet logs:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:30:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-287551
contexts:
- context:
    cluster: kubernetes-upgrade-287551
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:30:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-287551
  name: kubernetes-upgrade-287551
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-287551
  user:
    client-certificate: /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/kubernetes-upgrade-287551/client.crt
    client-key: /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/kubernetes-upgrade-287551/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-513160

>>> host: docker daemon status:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: docker daemon config:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /etc/docker/daemon.json:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: docker system info:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: cri-docker daemon status:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: cri-docker daemon config:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: cri-dockerd version:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: containerd daemon status:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: containerd daemon config:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /etc/containerd/config.toml:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: containerd config dump:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: crio daemon status:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: crio daemon config:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: /etc/crio:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"

>>> host: crio config:
* Profile "false-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-513160"
----------------------- debugLogs end: false-513160 [took: 4.595002193s] --------------------------------
helpers_test.go:175: Cleaning up "false-513160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-513160
--- PASS: TestNetworkPlugins/group/false (4.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (164.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-186870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0930 11:33:13.817270  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-186870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m44.19143149s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-186870 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f38f8b57-121d-4e85-8a67-0f53e99b5d7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f38f8b57-121d-4e85-8a67-0f53e99b5d7f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005391635s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-186870 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.88s)

TestStartStop/group/no-preload/serial/FirstStart (65.32s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-513240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-513240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m5.323475243s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-186870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-186870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.488324811s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-186870 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.68s)

TestStartStop/group/old-k8s-version/serial/Stop (14.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-186870 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-186870 --alsologtostderr -v=3: (14.495044251s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-186870 -n old-k8s-version-186870
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-186870 -n old-k8s-version-186870: exit status 7 (88.042735ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-186870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (147.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-186870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-186870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m26.865853472s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-186870 -n old-k8s-version-186870
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (147.21s)

TestStartStop/group/no-preload/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-513240 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c59553b3-a54c-428a-8f58-8585776932be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c59553b3-a54c-428a-8f58-8585776932be] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003224246s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-513240 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.46s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.65s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-513240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-513240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.512028164s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-513240 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.65s)

TestStartStop/group/no-preload/serial/Stop (12.24s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-513240 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-513240 --alsologtostderr -v=3: (12.236678558s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.24s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-513240 -n no-preload-513240
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-513240 -n no-preload-513240: exit status 7 (73.04185ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-513240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (267.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-513240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-513240 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m27.35271721s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-513240 -n no-preload-513240
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.71s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dzr7g" [f36bcc4b-7a18-426c-83da-88d9252e1be9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005114324s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dzr7g" [f36bcc4b-7a18-426c-83da-88d9252e1be9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003743747s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-186870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-186870 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-186870 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-186870 -n old-k8s-version-186870
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-186870 -n old-k8s-version-186870: exit status 2 (331.765033ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-186870 -n old-k8s-version-186870
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-186870 -n old-k8s-version-186870: exit status 2 (300.08974ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-186870 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-186870 -n old-k8s-version-186870
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-186870 -n old-k8s-version-186870
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.93s)

TestStartStop/group/embed-certs/serial/FirstStart (80.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-392405 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 11:38:13.817383  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-392405 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m20.839548468s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.84s)

TestStartStop/group/embed-certs/serial/DeployApp (13.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-392405 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [15e87e34-425e-4022-a116-534d8dc5fa7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [15e87e34-425e-4022-a116-534d8dc5fa7e] Running
E0930 11:39:33.820496  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.003354611s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-392405 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-392405 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-392405 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-392405 --alsologtostderr -v=3
E0930 11:39:50.751568  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-392405 --alsologtostderr -v=3: (12.001375552s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-392405 -n embed-certs-392405
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-392405 -n embed-certs-392405: exit status 7 (71.273735ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-392405 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (275.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-392405 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 11:39:53.963558  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:53.969988  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:53.981380  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:54.003703  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:54.045146  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:54.126689  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:54.288228  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:54.609920  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:55.251735  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:56.533237  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:39:59.094868  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:04.217489  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:14.459168  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:34.941334  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-392405 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m35.031304809s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-392405 -n embed-certs-392405
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (275.38s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hgl2w" [7c12287d-665b-4c0e-a842-d9e1b55fa162] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004153936s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hgl2w" [7c12287d-665b-4c0e-a842-d9e1b55fa162] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003283784s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-513240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-513240 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.03s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-513240 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-513240 -n no-preload-513240
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-513240 -n no-preload-513240: exit status 2 (330.02992ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-513240 -n no-preload-513240
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-513240 -n no-preload-513240: exit status 2 (320.907235ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-513240 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-513240 -n no-preload-513240
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-513240 -n no-preload-513240
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.03s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-069930 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 11:41:15.903347  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-069930 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m19.203493037s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.20s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-069930 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [14deb945-3e31-42a8-bf18-aa65c0e116d7] Pending
helpers_test.go:344: "busybox" [14deb945-3e31-42a8-bf18-aa65c0e116d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [14deb945-3e31-42a8-bf18-aa65c0e116d7] Running
E0930 11:42:37.825533  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.00366021s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-069930 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-069930 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-069930 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-069930 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-069930 --alsologtostderr -v=3: (12.011694834s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930: exit status 7 (71.849432ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-069930 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-069930 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 11:43:13.817117  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-069930 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m26.733236986s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.08s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nrgpr" [e02e5870-a2f0-409d-86f2-f981c6e2cee8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00367812s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nrgpr" [e02e5870-a2f0-409d-86f2-f981c6e2cee8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004721173s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-392405 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-392405 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-392405 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-392405 -n embed-certs-392405
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-392405 -n embed-certs-392405: exit status 2 (298.714724ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-392405 -n embed-certs-392405
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-392405 -n embed-certs-392405: exit status 2 (304.572346ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-392405 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-392405 -n embed-certs-392405
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-392405 -n embed-certs-392405
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)

TestStartStop/group/newest-cni/serial/FirstStart (33.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-986034 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 11:44:50.751514  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:44:53.963896  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-986034 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (33.924716083s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.92s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-986034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-986034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006618958s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-986034 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-986034 --alsologtostderr -v=3: (1.219985645s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-986034 -n newest-cni-986034
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-986034 -n newest-cni-986034: exit status 7 (67.681773ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-986034 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (15.62s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-986034 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0930 11:45:21.668996  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-986034 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (15.226191981s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-986034 -n newest-cni-986034
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.62s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-986034 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-986034 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-986034 -n newest-cni-986034
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-986034 -n newest-cni-986034: exit status 2 (309.95491ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-986034 -n newest-cni-986034
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-986034 -n newest-cni-986034: exit status 2 (310.096799ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-986034 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-986034 -n newest-cni-986034
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-986034 -n newest-cni-986034
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.98s)

TestNetworkPlugins/group/auto/Start (49.34s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0930 11:46:03.803871  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:03.810238  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:03.821711  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:03.843077  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:03.884400  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:03.965853  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:04.127374  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:04.449371  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:05.090766  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:06.372222  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:08.933692  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:14.055768  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:46:24.297496  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (49.340044318s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.34s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-513160 "pgrep -a kubelet"
I0930 11:46:32.325665  575428 config.go:182] Loaded profile config "auto-513160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-513160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4hxxv" [13ae623f-c1be-4508-8231-ddcf9237cf9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4hxxv" [13ae623f-c1be-4508-8231-ddcf9237cf9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003399926s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-513160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/Start (76.01s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m16.013684873s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g9m8l" [4753d508-59fd-422a-b6ea-74895406b4c5] Running
E0930 11:47:25.740897  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004609754s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g9m8l" [4753d508-59fd-422a-b6ea-74895406b4c5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004144844s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-069930 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-069930 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-069930 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-069930 --alsologtostderr -v=1: (1.306183329s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930: exit status 2 (494.883052ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930: exit status 2 (369.313517ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-069930 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069930 -n default-k8s-diff-port-069930
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.69s)

TestNetworkPlugins/group/calico/Start (60.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0930 11:47:56.888303  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:48:13.817182  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/functional-300388/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.853590086s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.85s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lzpjp" [16e4742a-db96-4647-95c5-45671d22ca38] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003389835s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-513160 "pgrep -a kubelet"
I0930 11:48:26.537448  575428 config.go:182] Loaded profile config "kindnet-513160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-513160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rtj9b" [16a70b34-ef6d-4981-8af8-3f6690b84f24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rtj9b" [16a70b34-ef6d-4981-8af8-3f6690b84f24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003652342s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-513160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rpqqb" [68c2d77e-82c0-4fc3-9c57-3852e6f00b86] Running
E0930 11:48:47.663439  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004752935s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-513160 "pgrep -a kubelet"
I0930 11:48:49.506105  575428 config.go:182] Loaded profile config "calico-513160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-513160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-znbqd" [bf0542ec-9add-434c-bba0-00f819029ce1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-znbqd" [bf0542ec-9add-434c-bba0-00f819029ce1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004831442s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.35s)

TestNetworkPlugins/group/custom-flannel/Start (64.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.322333007s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.32s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-513160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (78.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0930 11:49:50.751195  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/addons-718366/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:49:53.963846  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/old-k8s-version-186870/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.243034777s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-513160 "pgrep -a kubelet"
I0930 11:50:04.868601  575428 config.go:182] Loaded profile config "custom-flannel-513160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-513160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nmswp" [4853f4bc-6b5c-4b37-b69d-af23d9ba3cbe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nmswp" [4853f4bc-6b5c-4b37-b69d-af23d9ba3cbe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004076824s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-513160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (59.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.763645731s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.76s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-513160 "pgrep -a kubelet"
I0930 11:50:47.146488  575428 config.go:182] Loaded profile config "enable-default-cni-513160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.30s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-513160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zbsxd" [0133d8a9-a275-4539-b83d-d2b9cc837f68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zbsxd" [0133d8a9-a275-4539-b83d-d2b9cc837f68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.005754539s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-513160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (42.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0930 11:51:31.505544  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/no-preload-513240/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:32.561090  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:32.567532  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:32.578884  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:32.600288  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:32.641993  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:32.724153  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:32.885793  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:33.207757  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:33.849738  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:35.131147  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-513160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (42.44369169s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.44s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-r5zvf" [070b1d8b-31f8-46da-bab0-3bf5b54be618] Running
E0930 11:51:37.692521  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:51:42.814198  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005961374s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-513160 "pgrep -a kubelet"
I0930 11:51:43.528685  575428 config.go:182] Loaded profile config "flannel-513160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-513160 replace --force -f testdata/netcat-deployment.yaml
I0930 11:51:43.856141  575428 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8hgcz" [149903b5-d389-41a8-a223-541b09ffaf29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8hgcz" [149903b5-d389-41a8-a223-541b09ffaf29] Running
E0930 11:51:53.056008  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.0042898s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-513160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.28s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-513160 "pgrep -a kubelet"
I0930 11:52:06.235068  575428 config.go:182] Loaded profile config "bridge-513160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

TestNetworkPlugins/group/bridge/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-513160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-grbbt" [ffe35ff4-ed4f-4631-9e27-ce9950ed8431] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-grbbt" [ffe35ff4-ed4f-4631-9e27-ce9950ed8431] Running
E0930 11:52:13.538115  575428 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/auto-513160/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003992803s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.39s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-513160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-513160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.53s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-121895 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-121895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-121895
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-705119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-705119
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.51s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-513160 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-513160

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-513160

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /etc/hosts:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /etc/resolv.conf:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-513160

>>> host: crictl pods:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: crictl containers:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> k8s: describe netcat deployment:
error: context "kubenet-513160" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-513160" does not exist

>>> k8s: netcat logs:
error: context "kubenet-513160" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-513160" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-513160" does not exist

>>> k8s: coredns logs:
error: context "kubenet-513160" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-513160" does not exist

>>> k8s: api server logs:
error: context "kubenet-513160" does not exist

>>> host: /etc/cni:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: ip a s:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: ip r s:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: iptables-save:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: iptables table nat:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-513160" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-513160" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-513160" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: kubelet daemon config:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> k8s: kubelet logs:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:30:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-287551
contexts:
- context:
    cluster: kubernetes-upgrade-287551
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:30:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-287551
  name: kubernetes-upgrade-287551
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-287551
  user:
    client-certificate: /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/kubernetes-upgrade-287551/client.crt
    client-key: /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/kubernetes-upgrade-287551/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-513160

>>> host: docker daemon status:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: docker daemon config:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: docker system info:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: cri-docker daemon status:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: cri-docker daemon config:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: cri-dockerd version:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: containerd daemon status:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: containerd daemon config:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: containerd config dump:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: crio daemon status:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: crio daemon config:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: /etc/crio:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

>>> host: crio config:
* Profile "kubenet-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-513160"

----------------------- debugLogs end: kubenet-513160 [took: 3.358407093s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-513160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-513160
--- SKIP: TestNetworkPlugins/group/kubenet (3.51s)

TestNetworkPlugins/group/cilium (4.2s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-513160 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-513160

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-513160

>>> host: /etc/nsswitch.conf:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /etc/hosts:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /etc/resolv.conf:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-513160

>>> host: crictl pods:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: crictl containers:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> k8s: describe netcat deployment:
error: context "cilium-513160" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-513160" does not exist

>>> k8s: netcat logs:
error: context "cilium-513160" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-513160" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-513160" does not exist

>>> k8s: coredns logs:
error: context "cilium-513160" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-513160" does not exist

>>> k8s: api server logs:
error: context "cilium-513160" does not exist

>>> host: /etc/cni:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: ip a s:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: ip r s:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: iptables-save:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: iptables table nat:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-513160

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-513160

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-513160" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-513160" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-513160

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-513160

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-513160" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-513160" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-513160" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-513160" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-513160" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: kubelet daemon config:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> k8s: kubelet logs:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19734-570035/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:30:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-287551
contexts:
- context:
    cluster: kubernetes-upgrade-287551
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:30:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-287551
  name: kubernetes-upgrade-287551
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-287551
  user:
    client-certificate: /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/kubernetes-upgrade-287551/client.crt
    client-key: /home/jenkins/minikube-integration/19734-570035/.minikube/profiles/kubernetes-upgrade-287551/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-513160

>>> host: docker daemon status:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: docker daemon config:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: docker system info:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: cri-docker daemon status:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: cri-docker daemon config:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: cri-dockerd version:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: containerd daemon status:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: containerd daemon config:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: containerd config dump:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: crio daemon status:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: crio daemon config:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: /etc/crio:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

>>> host: crio config:
* Profile "cilium-513160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-513160"

----------------------- debugLogs end: cilium-513160 [took: 4.0194004s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-513160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-513160
--- SKIP: TestNetworkPlugins/group/cilium (4.20s)